- Author: Anthony Zurcher
- Role: North America Correspondent, BBC News
AI has amazing power to change the way we live, for better or worse — and experts don’t trust those in power to be prepared for what lies ahead.
In 2019, the non-profit research group OpenAI created a software program able to generate paragraphs of coherent text and perform rudimentary reading comprehension and analysis without specific instruction.
Initially, OpenAI decided not to make its creation, called GPT-2, fully available to the public, fearing that bad actors would use it to generate massive amounts of misinformation and propaganda.
In a press release announcing the decision, the group described the program at the time as “extremely dangerous”.
Three years have passed since then, and the capabilities of artificial intelligence have grown dramatically.
Unlike that earlier limited release, the next version, GPT-3, was made readily available to the public in November 2022.
The ChatGPT interface built on that program was the service that prompted thousands of news articles and social media posts, as reporters and analysts tested its capabilities, often with impressive results.
ChatGPT wrote stand-up comedy scripts in the style of the late American comedian George Carlin about the failure of Silicon Valley Bank. It opined on Christian theology, wrote poetry, and explained quantum physics to a child as though it were rapper Snoop Dogg.
Other AI models, such as Dall-E, have produced images so convincing that they have sparked controversy over their inclusion on art websites.
At least visually, machines have learned to be creative.
On March 14, OpenAI introduced GPT-4, the latest version of its software, which the group says has stronger limits against abuse. Early clients include Microsoft, Merrill Lynch and the government of Iceland.
The hottest topic at the South by Southwest Interactive conference, a global gathering of policymakers, investors, and technology executives held in Austin, Texas, was the potential and power of artificial intelligence software.
‘For better or for worse’
Arati Prabhakar, director of the White House Office of Science and Technology Policy, said she was excited about the potential of artificial intelligence, but also sounded a warning.
“What we’re all seeing is the emergence of this very powerful technology. It’s a turning point,” she declared at the conference.
“All of history shows that this kind of powerful new technology can and will be used for good and for ill.”
Austin Carson, founder of SeedAI, an AI policy advisory group, who was on the same panel, was a bit more direct.
“If you haven’t completely lost your mind in six months,” he told the audience, adding an expletive, “I’ll buy you dinner.”
“Losing your mind” is one way of describing what the future may hold.
Amy Webb, president of the Future Today Institute and a business professor at New York University in the United States, tried to anticipate the possible outcomes. According to her, AI could go in one of two directions over the next 10 years.
In the optimistic scenario, AI development is focused on the public good, systems are designed transparently, and individuals can decide whether their publicly available information on the internet is included in the AI’s knowledge base.
In this vision, technology serves as a tool that makes life easier and more seamless, as AI features in consumer products anticipate user needs and help accomplish virtually any task.
The other scenario Webb envisions is catastrophic. It involves less data privacy, more centralization of power in a handful of companies, and AI that anticipates user needs but gets them wrong or, at the very least, stifles their choices.
She believes the optimistic scenario only has a 20% chance of happening.
Webb tells the BBC that the direction the technology takes will depend largely on how responsibly the companies developing it behave. Will they operate transparently, disclosing and policing the sources from which chatbots, which scientists call large language models (LLMs), draw their information?
Another factor, she said, is whether the government — including federal regulators and Congress — can work quickly to create legal protections to guide technological developments and prevent misuse.
In that regard, governments’ experience with social media companies such as Facebook, Twitter and Google is instructive. And it is not an encouraging one.
“What I heard in a lot of the conversations was concern that there are no protective guardrails,” Melanie Sobin, managing director of the Future Today Institute, told the South by Southwest conference.
“There is a feeling that something needs to be done.”
“And I think social media, as a cautionary tale, is what sticks in people’s minds when they look at the rapid development of generative AI,” she added.
Fighting harassment and hate speech
In the United States, federal oversight of social media companies is based largely on the Communications Decency Act passed by Congress in 1996, as well as a short but powerful provision contained in Section 230 of the law.
The provision protects internet companies from liability for user-generated content on their sites. It is credited with creating the legal environment in which social media companies could thrive. But more recently, it has also been blamed for allowing those same companies to amass too much power and influence.
Right-wing politicians complain that the law has allowed the likes of Google and Facebook to censor or reduce the visibility of conservative views. The left accuses the companies of not doing enough to prevent the spread of hate speech and violent threats.
“We have an opportunity and a responsibility to recognize that hate speech breeds hateful actions,” said Jocelyn Benson, Michigan Secretary of State.
In December 2020, Benson’s home was the site of protests by armed supporters of Donald Trump, organized on Facebook, who were disputing the results of that year’s presidential election.
She has backed deceptive-practices laws in her state that would hold social media companies liable for knowingly spreading harmful information.
Similar proposals have been made at the federal level and in other states, along with legislation that would require social media sites to provide greater protections for underage users, be more open about their content moderation policies, and take more proactive steps to reduce online harassment.
Opinions differ, however, about the chances of these reforms succeeding. Big tech companies maintain entire teams of lobbyists in the US capital, Washington, and in state capitals. They also rely on bulging coffers to influence politicians with campaign donations.
“It’s been 25 years and, despite overwhelming evidence of problems with Facebook and other social networking sites, we are still waiting for Congress to pass legislation to protect consumers,” says technology journalist Kara Swisher. “They have abdicated their responsibility.”
The danger, says Swisher, lies in the fact that many of the companies that play a big role in social networking — Facebook, Google, Amazon, Apple and Microsoft — are now leaders in AI.
If Congress could not successfully regulate social media, it will be hard-pressed to act quickly on concerns about what Swisher calls an artificial intelligence “arms race”.
The comparison between AI regulation and social media is not merely academic, either. New AI technology could take the already turbulent waters of platforms like Facebook, YouTube and Twitter and churn them into a raging sea of misinformation, as it becomes increasingly difficult to distinguish posts by real humans from those of fake, but entirely convincing, AI-generated accounts.
Even if the government succeeds in passing new social media regulations, those rules may be rendered useless by a massive influx of harmful AI-generated content.
Among the countless panels at the South by Southwest conference was one entitled “How the US Congress Is Building AI Policy from the Ground Up”. After about 15 minutes of waiting, the organizers informed the audience that the panel had been canceled because the participants had gone to the wrong venue.
For anyone hoping to find indications of human competence in government, this incident was not at all encouraging.