As the use of Artificial Intelligence expands across all industries and practically every facet of society, there is a growing demand for responsible AI governance.
Responsible AI means ensuring that AI is used ethically, respects personal privacy, and avoids bias. A steady stream of firms, technologies, and researchers is working on these concerns. AI Responsibility Labs (AIRL) has joined the fray, announcing $2 million in pre-seed funding and a preview launch of the company’s Mission Control software-as-a-service (SaaS) platform.
Ramsay Brown, the company’s CEO, trained as a computational neuroscientist at the University of Southern California, where he spent a significant amount of time mapping the human brain. His first company, Dopamine Labs, was later renamed Boundless Mind; it focused on behavioral engineering and on using machine learning to forecast how individuals will behave. Thrive Global acquired Boundless Mind in 2019.
Brown and his colleagues at AIRL are tackling AI safety challenges, working to ensure that the technology is used responsibly and without harming society or the companies that employ it.
“We formed the company and built the Mission Control software platform to help data science teams execute their work better, more correctly, and faster,” Brown explained. “When we go around the responsible AI community, we see some individuals working on governance and compliance, but they aren’t talking to data science teams and finding out what genuinely hurts.”
What data science teams need to develop ethical AI
Brown was emphatic that no organization sets out to design an AI that is biased or that exploits data unethically.
Rather, data is misused by mistake, or machine learning models are trained on inadequate data, in complicated development efforts with many moving parts and many people involved. When Brown and his team asked data scientists what was missing and what was holding development back, respondents pointed to project management software rather than a compliance framework.
“That was our big a-ha,” he explained. “What teams actually missed was that they didn’t know what their teams were doing, not that they didn’t comprehend the regulations.”
Brown pointed out that dashboard technologies like Atlassian’s Jira transformed software engineering two decades ago, allowing engineers to produce software faster. Now, he hopes that AIRL’s Mission Control will serve as a data science dashboard, guiding data teams in the development of technologies that employ responsible AI methods.
Using existing AI and machine learning frameworks
Organizations today can use a variety of technologies to manage AI and machine learning workflows, commonly grouped under the industry term MLops. Popular options include AWS SageMaker, Google Vertex AI, Domino Data Lab, and BigPanda.
One of the things Brown’s team has learned while developing its Mission Control service is that data science teams prefer to use a variety of tools. He explained that AIRL isn’t trying to compete with MLops platforms or other AI tools, but rather to serve as an overlay for responsible AI use. AIRL has built an open API endpoint that lets a Mission Control team feed in data from any platform and have it become part of monitoring operations.
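AIRL has not published the API’s schema, so the endpoint URL, payload fields, and authentication below are purely illustrative assumptions; the sketch only shows the general pattern of a platform-agnostic event feed, where any MLops tool pushes events over HTTP into a central monitoring service:

```python
# Hypothetical sketch of feeding platform events into a monitoring API.
# The endpoint, payload fields, and auth header are assumptions for
# illustration; AIRL's actual API may look entirely different.
import requests

MISSION_CONTROL_URL = "https://api.example.com/v1/events"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential


def report_training_run(model_name: str, dataset: str, source_platform: str) -> None:
    """Forward a training-run event from any MLops platform for monitoring."""
    payload = {
        "event_type": "training_run",
        "model": model_name,
        "dataset": dataset,
        "platform": source_platform,  # e.g. "sagemaker", "vertex-ai"
    }
    response = requests.post(
        MISSION_CONTROL_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors instead of failing silently


# A job running on any platform reports itself into monitoring operations:
report_training_run("churn-model-v3", "customers-2021-q4", "sagemaker")
```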
Mission Control gives teams a framework for taking what they have been doing ad hoc and turning it into standardized machine learning and AI operations processes.
According to Brown, Mission Control lets users turn data science notebooks into repeatable processes and workflows that operate within set limits for responsible AI use. In this model, the data is linked to a monitoring system that can notify an organization if policies are broken. For example, if a data scientist uses a data set that policy does not permit for a specific machine learning operation, Mission Control can automatically detect it, raise a flag to management, and pause the workflow.
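To make that concrete, here is a minimal sketch of the kind of policy gate Brown describes. The policy format, function names, and alerting hook are all assumptions for illustration, not AIRL’s actual implementation; the idea is simply that a workflow step declares its dataset and task up front, and a disallowed combination raises a flag and halts execution:

```python
# Illustrative policy gate: the policy table, exception, and alert hook are
# assumptions, not AIRL's API. A real system would load policy from config.
DATASET_POLICY = {
    # dataset name -> set of ML tasks the dataset is approved for
    "customers-2021-q4": {"churn_prediction"},
    "support-tickets": {"topic_modeling", "sentiment"},
}


class PolicyViolation(Exception):
    """Raised to pause a workflow when a dataset is used outside policy."""


def notify_management(message: str) -> None:
    # Stand-in for whatever alerting channel an org wires up (email, Slack, ...).
    print(f"[POLICY ALERT] {message}")


def check_dataset_use(dataset: str, task: str) -> None:
    """Flag and halt if `dataset` is not approved for `task`."""
    allowed = DATASET_POLICY.get(dataset, set())
    if task not in allowed:
        notify_management(f"Dataset '{dataset}' is not approved for task '{task}'.")
        raise PolicyViolation(dataset)  # pausing the workflow = stopping here


# A workflow step declares its dataset and task before running:
check_dataset_use("customers-2021-q4", "churn_prediction")  # passes silently

try:
    check_dataset_use("support-tickets", "churn_prediction")  # violates policy
except PolicyViolation:
    pass  # the workflow stops here instead of training on the disallowed data
```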