The last time state lawmakers entertained testimony on AI, there were mentions of the risks going forward, but most of the meeting dealt with ways artificial intelligence might improve state government services — helping rewrite outdated government code or handle phone calls from constituents, for example.
Thursday, the message policymakers heard sounded more like the pitch for a dystopian sci-fi movie.
In a brief but sobering presentation, cybersecurity expert Roman Yampolskiy said computer scientists are quickly approaching the development of what's known as general AI, something less like a tool and more like an active agent. And whether researchers reach that goal in two to three years (the most confident estimate) or four to five, the consequences, he warned, are likely to be ultimately uncontrollable.
During questions, lawmakers fished around for some reassurance that an untethered artificial intelligence, or superintelligence, isn't a foregone conclusion, but Yampolskiy offered little in the way of comfort.
Pressed on whether there's anything policymakers can do to head off potentially catastrophic scenarios down the line, Yampolskiy pointed to policies that would encourage companies to pump the brakes.
"If there is anything you can do to make incentives work and whoever does it slowest and safest wins, then we all possibly might win, but I am not optimistic that you can beat industry in terms of timelines and deployment," he responded.
Lexington Sen. Reggie Thomas, a Democrat, said he sees the problem as one that should be on the front burner for legislators at all levels.
"I think this is the most important issue that lawmakers across the country must address and deal with in 2024," he asserted.
Whether or not the concerns translate into legislation, Yampolskiy's message was less of a guide and more of a warning.