December 21, 2023 – Exactly one year ago, most people in the world of technology and the internet were talking about passing the Turing Test as if it were a distant prospect.
This "test" was originally called the Imitation Game by computer scientist Alan Turing. Proposed in 1950, it is a hypothetical test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
But in 2023, in the wake of ChatGPT's release on November 30, 2022, the explosive economic, technological, and social power unleashed by OpenAI has made those days, just 13 months ago, seem quaint.
For example, users of large language models such as ChatGPT, Anthropic's Claude, and Meta's Llama interact with machines every day as if they were simply very smart people.
Yes, knowledgeable users will concede that such chatbots are simply neural networks running powerful predictive algorithms, generating a probabilistic "next word" in the sequence begun by the asker's prompt. And yes, users understand that such machines tend to "hallucinate" information that is not entirely accurate, or not accurate at all.
That makes chatbots look a little more human-like.
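For readers curious about the mechanics, here is a minimal sketch in Python, purely illustrative and not any vendor's actual implementation, of the "probabilistic next word" step described above: the model's neural network assigns a score (a logit) to every candidate token, and a softmax plus a sampling step turns those scores into the word that gets appended to the reply. The prompt and the scores in the example are invented for illustration.

```python
import math
import random

def softmax(logits):
    # Turn raw model scores into a probability distribution over candidate next words.
    highest = max(logits.values())
    exps = {word: math.exp(score - highest) for word, score in logits.items()}
    total = sum(exps.values())
    return {word: value / total for word, value in exps.items()}

def sample_next_word(logits, temperature=0.8):
    # Lower temperature sharpens the distribution (more predictable text);
    # higher temperature flattens it (more surprising, occasionally "hallucinated" text).
    scaled = {word: score / temperature for word, score in logits.items()}
    probabilities = softmax(scaled)
    words = list(probabilities.keys())
    weights = list(probabilities.values())
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical scores a model might produce after the prompt "The Turing Test is".
example_logits = {"a": 2.1, "passed": 1.4, "obsolete": 0.7, "banana": -3.0}
print(sample_next_word(example_logits))
```

Run repeatedly, the snippet usually prints "a" but occasionally picks a less likely word, the same controlled randomness that makes a chatbot's replies feel conversational rather than canned.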
Drama with OpenAI
At the Broadband Breakfast Live Online event held on November 22, 2023, marking the first anniversary of ChatGPT's launch, our expert panelists focused on the regulatory uncertainties left in the wake of massively accelerated artificial intelligence.
The event took place just days after Sam Altman, OpenAI's CEO, was fired; he returned to the company with a new board of directors that Wednesday. The board members who ousted Mr. Altman, all but one of whom were replaced, had clashed with him over the company's safety efforts.
More than 700 OpenAI employees subsequently signed a letter threatening to resign unless the board members themselves stepped down and reinstated Altman.
In other words, there was a policy dimension behind the boardroom battle, which was itself the biggest tech news story of the year.
"This [was] accelerationism and decelerationism," said Adam Thierer, a senior fellow at the R Street Institute, during the event.
Washington and the FCC wake up to AI
And it is not as if Washington is turning a blind eye to the potentially life-changing implications of artificial intelligence.
In October, the Biden administration issued an executive order on AI safety that includes measures aimed at both ensuring safety and fostering innovation. It directs federal agencies to develop standards for safety and for identifying AI-generated content, and it includes grants for researchers and small businesses looking to use the technology.
But it's unclear which side lawmakers on Capitol Hill will take in the future.
One notable application of AI in communications, highlighted by FCC Chairwoman Jessica Rosenworcel, is AI-powered optimization of spectrum sharing. Rosenworcel said at a hearing in July that AI-enabled radios can work together autonomously to enhance spectrum utilization without a central authority, and that this advancement is ready for implementation.
The potential contribution of AI to enhancing broadband mapping efforts was discussed at a House hearing in November. The FCC initially believed that AI had strong potential to aid broadband mapping, but the idea has faced skepticism from experts who argue that machine learning will struggle to identify potential inaccuracies in rural areas where data is scarce and of poor quality.
Also in November, the FCC voted to launch a formal inquiry into the potential impact of AI on robocalls and robotexts. The agency believes AI can help combat illegal robocalls by flagging calling patterns that appear suspicious and by analyzing voice biometrics in synthesized voices.
But isn't ChatGPT a type of artificial general intelligence?
As we have focused on AI over the past year, the much-vaunted concept of "artificial general intelligence" has come to mean something far beyond passing the Turing Test; presumably, something at least a little smarter than ChatGPT-4.
Previously, OpenAI defined AGI as "AI systems that are generally smarter than humans." But the company apparently redefined this more recently to mean "highly autonomous systems that outperform humans at most economically valuable work."
Some, including Rumman Chowdhury, CEO of the responsible-tech nonprofit Humane Intelligence, argue that framing AGI in economic terms shows OpenAI has reoriented its mission toward building things that sell with intelligent AI systems, a far cry from its original vision of benefiting everyone.
ChatGPT-4 told reporters that AGI “refers to a machine's ability to understand, learn, and apply its intelligence to solve any problem, just like humans. It's advanced, but limited to tasks within its training and programming. It's great for language-based tasks, but it doesn't have the broad, adaptable intelligence that AGI implies.”
That sounds like exactly what an AGI-enabled machine would want the world to believe.
Additional reporting for this story was provided by Jericho Casper.
See 12 Days of Broadband on Broadband Breakfast