Elevate Business Outcomes
Upskill Your Workforce
Solve Complex Problems

Revolutionize your business with Datalytix: AI-driven insights, drone and 360 monitoring, and innovative solutions like Optic and AVA for any industry.

What We Do

Advanced Analytics & Visualization Solutions

Leverage AI-driven intelligence to solve complex challenges, empower people, and win in your markets.

Analytic Process Automation (APA)

Automate and streamline resource-intensive processes. Put APA at the heart of your digital transformation.

Data Management Solutions

Simplify collection, preparation, integration, processing, analyses, and management of data enterprise-wide.

Location Intelligence

Harness the power of high-definition aerial maps that enable confident, informed choices.

Industries

Construction & Transportation

Healthcare

Government

Who We Are

Datalytix is a team of passionate data and technology experts delivering AI/ML-driven solutions across construction, healthcare, and government sectors. We provide real-time construction monitoring, remote inspections, and safety risk mitigation; healthcare data integration, predictive analytics, and compliance frameworks; and fraud detection, system modernization, and survey platforms for government. Our tailored solutions help businesses optimize performance, mitigate risks, and drive innovation.

What’s your toughest business challenge?
Schedule a complimentary 30-minute consultation
with a business analytics expert.

Join our global community

Get In Touch

Address

Washington DC Metro Area

Email

info@datalytixglobal.com

Send Us A Message

Engineers love it. Corporate boards like it. Customers depend on it. Even POTUS recently signed an executive order on “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.” Undoubtedly, AI has remarkable capabilities that enable us to solve complex macro- to micro-level challenges. But as new AI applications and proofs of concept (POCs) move from test labs into full integration with enterprise systems, leaders must address AI-related governance questions sooner rather than later.

Adoption of AI technologies by private and public sector enterprises is well underway. Respondents to PwC’s “AI Predictions 2021” survey say they already use AI applications to:

  • Manage fraud, waste, abuse, and cybersecurity threats
  • Improve AI ethics, explainability, and bias detection
  • Help employees make better decisions
  • Analyze scenarios using simulation modeling
  • Automate routine tasks

But artificial intelligence (AI)-driven decisions are only as good as the quality of governance over AI models and the data used to train them. Gartner predicts that by 2023, up to 70% of commercial AI products that lack transparent, ethical processes as part of a governance strategy will be stopped due to public opposition or activism.

What is AI Governance?

AI governance is a framework that proposes how stakeholders can best safeguard the research, design, and use of machine learning (ML) algorithms and AI in decision-making. AI governance is not simply a matter of meeting compliance requirements. It requires attentive care and feeding and adequate oversight to ensure equitable and ethical use. For example, AI algorithms already impact who does (or doesn’t) get a job interview, government benefits, credit, loans, and medical services. People create algorithms that, if left unscrutinized, ultimately reflect their makers’ biases – and the quality and timeliness of the data they ingest. Because these mathematical constructs and historical data cannot understand what is “equitable” by today’s standards, public and private sector leaders, algorithm designers, technologists, and citizens have an ethical and professional responsibility to screen for bias and to prevent, monitor, and mitigate drift and bias in AI/ML models. Results from biased models and data sets – whether the bias is conscious or unconscious – can adversely impact individuals, businesses, society, and especially government agency missions and programs.

In AI We Trust…Not So Fast

AI solutions that address problems and modernize systems can impact society, positively and negatively, in massive, untold ways. How does one trust an AI model to produce a “correct” answer? And correct according to whom, to which model or algorithm, and in what context? Without FDA-like oversight authority, how can government AI models equitably reflect and be, as Abraham Lincoln famously said, “of the people, by the people, for the people”? To demonstrate credibility to business and public sector leaders and to the public, AI models must consistently deliver trustworthy responses. Much of an AI model’s bias comes from the data used to train its algorithms. Training can be supervised or unsupervised, and either machine learning approach can reflect bias if proper governance practices are not in place. For example, facial recognition models are often trained predominantly on images of light-skinned people of European descent, with darker-skinned people under-represented in the training data. As a result, these models are more likely to produce accurate results for light-colored skin and false positives for people with dark-colored skin (a minimal disparity check is sketched after the list below). Consider these other publicly known incidents of bias in AI models:

  • Amazon reined in its recruitment bot due to its sexist hiring algorithm.
  • Google has been hit hard by negative publicity related to Timnit Gebru and Margaret Mitchell’s publicly voiced concerns about bias in its models and toxicity in Google’s AI systems.
  • Apple co-founder Steve Wozniak got 10x the credit limit his wife did for Apple’s credit card due to a biased algorithm.
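
As an illustration of the kind of check a governance process might require, here is a minimal Python sketch (the data and column names are hypothetical) that compares false-positive rates across demographic groups for a binary match/no-match classifier. A large gap between groups is the sort of disparity a governance review should flag.

    # Minimal sketch: compare false-positive rates across groups for a binary
    # "match / no-match" classifier. Data and column names are hypothetical.
    import pandas as pd

    results = pd.DataFrame({
        "group":      ["light", "light", "dark", "dark", "dark", "light"],
        "true_label": [0, 1, 0, 0, 1, 0],   # 1 = genuine match
        "predicted":  [0, 1, 1, 1, 1, 0],   # model output
    })

    def false_positive_rate(df):
        negatives = df[df["true_label"] == 0]
        return float("nan") if negatives.empty else (negatives["predicted"] == 1).mean()

    # A large gap between groups signals a bias problem worth investigating.
    print(results.groupby("group").apply(false_positive_rate))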

AI initiatives shaped early by a well-thought-out, effective governance strategy reduce the risk and costs associated with fixing foundational bias later.

Critical Principles of AI Governance

With a strong AI governance plan in place, organizations using AI/ML can prevent reputational damage, wasted investments in inherently biased models, and poor or inaccurate results. Here is an introduction to seven fundamental principles of good AI governance: explainability, transparency, interpretability, fairness, privacy and security, accountability, and beneficence. Look for future blogs that cover these AI governance principles in more detail.

  • Explainability, or explainable AI (XAI), means the methods, techniques, and results (e.g., classifications) of an AI solution must be articulated in terms humans can understand. A human should be able to understand what actions an AI model took, is taking, and will take to generate a decision or result – to confirm existing knowledge, challenge it, and adjust the assumptions used in the model to mitigate bias. XAI uses “white-box” machine learning (ML) models that generate results easily understood by domain experts (see the sketch after this list). “Black-box” ML models, by contrast, are opaque, and their complexity makes them hard to understand, let alone explain or trust.
  • Transparency is the AI model designer’s ability to describe the data extraction parameters for training data and the processes applied to that data in easy-to-understand language.
  • Interpretability means humans must be able to translate and clearly convey the basis for decision-making in the AI model.
  • Fairness ensures AI systems treat all people fairly and are not biased toward any specific group.
  • Privacy and Security should protect people’s privacy and produce results without posing security risks.
  • Accountability ensures AI systems and their makers are held accountable for what they produce.
  • Beneficence means AI should contribute to people and society in positive ways.
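
To make the “white-box” idea above concrete, here is a minimal, illustrative Python sketch. It uses a public sample dataset and scikit-learn’s decision tree purely as an example of a model whose learned rules can be printed and reviewed in plain language; it does not represent any specific production model.

    # Minimal sketch of a "white-box" model whose decisions a human can read.
    # The dataset and feature names are illustrative only.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    model = DecisionTreeClassifier(max_depth=3, random_state=0)
    model.fit(data.data, data.target)

    # export_text renders the learned rules as plain if/else statements that a
    # domain expert can review, challenge, and document.
    print(export_text(model, feature_names=list(data.feature_names)))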

Critical AI Governance Questions to Answer

Gartner predicts that by 2024, 60% of AI providers will include a means of mitigating possible harm as part of their technologies. Similarly, analysts predict that by 2023, all personnel hired for AI development and training work will have to demonstrate expertise in the responsible development of AI. These predictions suggest enterprises would be wise to invest in AI governance planning now to answer these critical questions about the results from, and decision-making parameters of, AI applications:

  • Who is accountable? If biases or mistakes are found in an AI model, who must correct them, and by what standards? What are the consequences, and for whom?
  • How does AI align with your strategy? Consider where AI is essential to or can enhance your business or mission success. For example, could AI improve and automate operational processes to gain efficiency? Are network threats recognized early enough to prevent downtime? Are you alerted when machinery will fail so you can fix it before there is an impact on your mission or business?
  • What processes could be modified or automated to improve the AI results?
  • What governance, performance, and security controls are needed to flag faulty AI models?
  • Are the AI models’ results consistent, unbiased, and reproducible? (A minimal consistency check is sketched after this list.)
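
As one concrete way to test consistency and reproducibility, the minimal sketch below (synthetic data and a placeholder model, not any specific production pipeline) trains the same model twice with a fixed random seed and verifies that the predictions agree.

    # Minimal reproducibility check: identical seeds should yield identical predictions.
    # The dataset and model are placeholders for whatever your pipeline uses.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=500, random_state=42)

    def train_and_predict(seed):
        model = RandomForestClassifier(n_estimators=50, random_state=seed)
        model.fit(X, y)
        return model.predict(X)

    assert np.array_equal(train_and_predict(0), train_and_predict(0)), \
        "Identical seeds should reproduce identical predictions"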

If you need help answering these or other AI governance or AI/ML solution questions or are looking for a solution to protect your investment in complex enterprise systems with AI algorithms embedded, contact a Datalytix AI Governance expert to discuss your needs.

A Pathway to Better Outcomes and an Upskilled Workforce

It is no secret that artificial intelligence (AI) and machine learning (ML) technologies can unearth buried “treasure” hidden in volumes of unstructured and structured data, including images. According to McKinsey Global Institute’s research, by 2030, 70% of companies will have implemented at least one of five types of AI: computer vision, natural language, virtual assistants, robotic process automation (RPA), and advanced ML. Government agencies and large enterprises already use AI/ML to detect fraud, waste, and abuse in unemployment, healthcare, and other publicly funded benefit programs. A growing number of enterprises are now also investing in AI for cutting-edge image analytics.

Diagnose Patients & Detect Disease Earlier

AI-powered image analysis techniques and tools can provide new pathways to better healthcare, economic, and mission outcomes. In healthcare, for example, radiologists were early adopters of AI-driven image analysis technology; it helps them diagnose patients more accurately and detect disease earlier. Using AI for image analysis, radiologists can detect and diagnose pre-cancerous lesions, early-stage brain tumors, small abnormalities in mammography screenings, and internal bleeding – findings that can be invisible to the human eye.

Convolutional Neural Networks (CNNs) for Imaging

Researchers apply convolutional neural networks (CNN) – another AI technique – to analyze more common forms of cardiovascular imaging, like EKGs, to detect heart disease earlier and reduce the need for risky and expensive open-heart surgery. Deep learning algorithms augment MRI data by assessing texture, volume, and shape to enhance cancer diagnosis, often eliminating the need for biopsies. Microsoft’s promising InnerEye research project uses advanced ML to automate quantitative analysis of 3D medical images. According to Project InnerEye, their goal is to “democratize AI for medical image analysis and empower researchers, hospitals, life science organizations, and healthcare providers to build medical imaging AI models using Microsoft Azure.”
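
For readers unfamiliar with the technique, the sketch below shows what a minimal CNN for two-class image classification (e.g., “finding” vs. “no finding”) can look like in Keras. It is a generic, illustrative architecture with assumed input size and labels, not the model used by any of the research projects mentioned above.

    # Minimal, illustrative CNN for binary image classification.
    # Input size, labels, and training data are assumptions for the example.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(128, 128, 1)),      # grayscale image
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of a finding
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(train_images, train_labels, epochs=10, validation_data=(val_images, val_labels))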

Automated Image Processing

Meanwhile, the U.S. Food and Drug Administration (FDA) is accelerating its review and approval of technology that augments diagnostic image reviews and automates image processing. Another benefit of this approach to image analysis is that it offers the chance to upskill a more diverse workforce for future jobs. For example, the FDA recently cleared IDx-DR, an AI-enabled device that allows non-traditionally trained personnel to take the high-quality images used to detect diabetic retinopathy, helping reduce the constant backlog of imaging-related tasks.

At Datalytix, we apply proprietary AI/ML technology to analyze satellite images, enhance situational awareness, and detect pandemic-prone pathogens, such as the virus that causes COVID-19, across a range of imagery. We specialize in the design, development, and implementation of innovative, advanced analytics solutions that deliver insights and shape a future filled with better outcomes – for patients, scientists, service providers, and people from all walks of life.

Contact Datalytix to learn how we can help you solve your business challenges using advanced analytics solutions.

Having a sound AI governance strategy has become imperative. Our previous blog discussed what AI governance is, why we need it, and its fundamental principles. Here we lay out three steps you can take to develop an AI governance strategy that helps you stay competitive in a global economy, build the public’s trust in AI, and accelerate AI adoption in your organization.

STEP 1: Build a compelling business case for AI in your organization.

Business leaders typically evaluate a technology investment in terms of its return on investment (ROI). Investments in AI governance are no different. As a result, you will need to present decision-makers with a compelling business case for how an AI governance strategy will reduce risk, improve growth and profitability, and facilitate goal achievement. Data scientists and technical experts who understand the power and benefits of AI in technical terms will want to show how an investment in AI governance benefits the bottom line. Therefore, AI governance leads, including technical experts, must clearly understand and define the critical problem(s) to solve with their AI governance strategy. They must also accurately assess the time and resources needed to design, stand up, and support a solid AI governance program that yields positive results for the business. Any assessment must also define and quantify the potential impacts, including the risks and costs of NOT having a plan.

To make your case for funding AI governance in your organization, start with the following:

  • Outline the scope of the AI governance strategy. Focus on realistic business outcomes you can commit to, potential opportunities, and your strategic priorities. Stay focused on the organization’s identified AI-related pain points, but make sure you can scale your AI governance strategy to the enterprise level as the use of AI in embedded systems expands.
  • Define and detail desired outcomes. Estimate the value AI governance can bring to your business through better short- and long-term risk management. Develop a realistic roadmap for implementing a standards-compliant governance program. Select an industry-standard framework, approach, and set of best practices, and accurately estimate the time and resources required for your AI governance effort to succeed and be sustainable.

STEP 2: Your AI governance strategy should guide the development of AI embedded in applications, hybrid-cloud services, and solutions used across your ecosystem.

  • Document how your organization will adhere to principles of responsible AI. List actions to ensure the fundamentals for good AI governance – transparency, interpretability, ethics, privacy, trusted autonomy, explainability – are met.
  • Establish diversity requirements for your AI experts and contracted resources that mitigate the risk of bias in your data and AI models. Consider hiring a cross-functional team of subject matter experts, data scientists, technical experts, and analysts from diverse backgrounds, cultures, genders, ethnicities, age ranges, and more, who embody differing worldviews, perspectives, experiences, industries, and approaches to problem-solving.
  • Define diversity requirements for algorithms, data sources, and data sets. Diverse algorithms fed with clean and varied data from trusted sources produce more ethical and accurate AI outcomes – and reduce risks.
  • Thoughtfully select best practices, governance frameworks, processes, and standards – anchored in trust, transparency, and diversity – to guide AI users.
  • Develop robust policies and procedures that follow industry standards for using AI. Start with a few fundamental principles for using AI, then work toward principles that help your organization meet legal/compliance obligations and align with your organization’s core values.

Policies and procedures should meet these standards:

  •  Be intentional and compliant with your organization’s values related to safety, security, accuracy, and reliability
  •  Ensure the benefits of AI outweigh the risks associated with using it; utilize a valid cost-benefit analysis
  •  Be transparent and disclose usage of AI in your applications to stakeholders
  •  Establish tools, technologies, and roles to monitor and audit the results
  •  Thoroughly document all algorithms used in your systems and applications and all changes, including the identities of team members who created the algorithm and made subsequent changes
  •  Ensure decisions using AI/ML can be explained with supporting data lineage and document the process for evaluating data quality, risks of bias, etc.
  •  Ensure AI models are not vulnerable to malicious manipulation or exploitation
  •  Monitor AI applications for inconsistencies and routinely test them against AI governance principles (a minimal drift-check sketch follows this list).
  •  Make your principles and plan available to your stakeholders.
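
As a concrete illustration of the monitoring item above, here is a minimal Python sketch (synthetic data and an illustrative threshold) that flags feature drift between training data and live production data using the Population Stability Index, one common drift metric among many.

    # Minimal drift check: compare a feature's production distribution against the
    # training baseline using the Population Stability Index (PSI).
    import numpy as np

    def psi(baseline, current, bins=10):
        edges = np.histogram_bin_edges(baseline, bins=bins)
        base_pct = np.clip(np.histogram(baseline, bins=edges)[0] / len(baseline), 1e-6, None)
        curr_pct = np.clip(np.histogram(current, bins=edges)[0] / len(current), 1e-6, None)
        return np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct))

    rng = np.random.default_rng(0)
    training_feature = rng.normal(0.0, 1.0, 10_000)
    live_feature = rng.normal(0.4, 1.0, 10_000)   # simulated shift in production

    # PSI above roughly 0.2 is a common "investigate" threshold (an assumption here).
    print(f"PSI = {psi(training_feature, live_feature):.3f}")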

STEP 3: Get guidance from an experienced AI governance expert.

Governance, explainability, and traceability requirements will likely differ for each AI application, as will security, privacy, and transparency considerations. We offer the following advice:

  • Consult with an experienced, technology-agnostic vendor that specializes in conducting unbiased AI governance assessments. Test your strategy and AI governance program as you would test for security or compliance gaps against established frameworks.
  • Ask whether your vendor can securely monitor AI and ML models for performance and drift, and whether its tooling can handle all relevant data types at any volume without degrading performance.
  • Ask what real-time and historical reporting and auditing capabilities are available and how customizable they are.

Datalytix regularly helps our clients develop AI governance strategies and advanced analytics solutions. Contact an AI Governance expert from Datalytix if you need advice or help to:

  • Prepare an inventory of AI applications and use cases – from proof of concept through production.
  • Develop a plan to ensure AI principles are applied in your organization and that applications not meeting pre-defined AI principles are retired.
  • Develop a business case showing the benefits of an AI governance strategy.
  • Define human roles and responsibilities to monitor (input, output, training data, etc.), analyze, and audit models for any biases/drifts.
  • Identify and train AI experts within your organization to take AI governance work forward.
  • Apply the governance strategy to innovative AI, advanced analytics, and data management technologies that solve current challenges.