Putting the AI Genie Back in the Bottle?

Much has been said and written about the meteoric rise of AI over the past twelve months or so. That includes our own blog: in the last two quarters, we’ve covered AI governance, generative AI security questions, and even ChatGPT’s one-year anniversary.



Coverage is so thorough that AI—generative AI specifically—has now passed the peak of Gartner’s Hype Cycle and is starting to feel more than a little overplayed. That said, the metaphorical AI Genie is well and truly out of the bottle, with concerns rising almost as fast as the AI technology updates that seem to arrive nearly every day. Where does this put the data and analytics industry? Or society as a whole? Is it even possible to put the AI Genie back in the bottle, and if we could, would we want to?



 



Unleashing the AI Genie—responsibly 



Like many in the industry, we at Domo are focusing much of our efforts on AI capabilities within our platform, and on ensuring that those capabilities make sense and deliver value to our customers. We’re working hard to ensure that data, as the foundational AI asset, is managed and ready to deliver on AI’s promises.



Likewise, we’re taking a pragmatic approach to the expanding portfolio of available AI models by providing model-agnostic management tooling and integration. This is our top priority in today’s data and analytics landscape.



Technology and business use cases aside, the other area abuzz with AI readiness is regulation. I’ve had the pleasure of working with Australian universities and federal government agencies on policy and guidance around “responsible” AI.



The breakneck speed of AI development has created a strong sense of urgency among regulators to both understand the potential risks of AI and develop mitigating strategies. As most will appreciate, this is somewhat of a thankless task, with regulators damned if they do and damned if they don’t. It also appears the AI Genie is relishing its time out of the bottle and is showing no signs of wanting to get back in.



 



Regulating the AI Genie—top two concerns 



Regulation can take many forms, ranging from outright prohibition through to recommendations and guidelines, with varying degrees of enforcement. Key concerns at present fall into two camps:



- The technology itself, including the speed of development, data considerations, governance, infrastructure, and operating costs.
- The impact and potential risks to business and society, primarily from a legal perspective: bias and ethics, human accountability, and unexplainable outcomes.







Compounding these concerns is the need to “get it right”—regulators rarely have the luxury of trial and error and are beholden to a wide range of interest groups, all of whom demand immediate responses. By definition, though, regulation needs to be conservative (patient, even?) so as not to overreach or unnecessarily stifle growth and innovation. Normally this type of constraint is workable. However, the pace of AI development and adoption is driving new levels of urgency—including early propositions to “pause” AI altogether!



So where does that leave us? Clearly there’s no way (or need) to put the AI Genie back in the bottle. However, now that the initial surge of AI hype is passing, it is incumbent on the industry to develop a more nuanced response to AI’s possibilities.



While there is no shortage of innovation and commercial opportunity, we need to ensure that we do everything possible to minimise risks and drive productive, sustainable use cases. If we don’t, AI risks becoming a technology underachiever, and we risk squandering its potential.