In his recent, pointed testimony to Congress, Sam Altman, CEO of OpenAI, the company that created ChatGPT, urged the federal government to adopt regulations and tighter restrictions on artificial intelligence systems.
“What we need at this pivotal moment is clear, reasonable policy and sound guardrails,” Altman said. “These guardrails should be matched with meaningful steps by the business community to do their part and achieve the best outcomes for their customers. This should be an issue where Congress and the business community work together to get this right for the American people.”
It is rare for a tech CEO to proactively solicit government regulation of technology, and rarer still from the leader of a company whose product, ChatGPT, is experiencing meteoric growth. You would expect someone in Altman’s position to fight limitations and guardrails.
To be clear, this is a watershed moment. Privacy, accuracy, bias and abuse are all areas where generative AI, in its current state, could produce catastrophic results. Altman’s plea recognizes the technology’s tremendous promise while acknowledging flaws that create serious problems for consumers and businesses.
For example, consider the damaging aftermath for a law professor wrongly accused of sexual harassment after ChatGPT generated false information about him. The falsely accused professor had no recourse to have that information removed.
Or consider the group of students at Texas A&M who were temporarily denied their diplomas after a professor ran their assignments through ChatGPT and erroneously concluded they had been written by AI.
AI bias is also an issue. The algorithm used to determine creditworthiness for Apple’s credit card, Apple Card, gave one male applicant 20 times the credit limit his wife received, despite her higher credit score. Apple co-founder Steve Wozniak acknowledged the problem, admitting that he was granted 10 times more credit than his wife.
There are many other examples of potentially life-altering outcomes in cases where generative AI gets it wrong. So what should be done, and who should do it?
Altman has the right idea. It should be a group effort involving legislative guardrails and more responsible use of AI by businesses.