Let’s Talk About Artificial Intelligence
Businesses are loving AI and its dominance appears to be on the horizon. So what do the experts have to say?
Two new reports about artificial intelligence have come out in recent weeks, one by academics and one by industry. Let’s go over the highlights and talk about how the technology may impact our lives.
Stanford AI Index Report:
Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) published the sixth edition of its AI Index Report last week, highlighting several new industry developments and laying out the current state of AI.
Here are some of the key takeaways:
Industry has overtaken academia: Until 2014, academia produced most of the world’s significant machine learning models (Oxford University’s Big Data Institute, for example, has developed and deployed several). Since then, industry has raced ahead: in 2022, businesses produced 32 significant machine learning models, while academia produced just three.
Incidents involving the misuse of AI are on the rise: According to data tracked by the AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) Repository, one of the world’s most comprehensive initiatives tracking the ethical misuse of AI, the number of incidents and controversies involving the misuse of AI has increased 26-fold since 2012. The report cites the March 2022 deepfake video of Ukrainian President Volodymyr Zelenskyy surrendering as one of the most notable incidents of 2022.
Global private investment in AI fell in 2022, but businesses are definitely into the technology: Private investment in AI reached $91.9 billion in 2022, down from $125.4 billion in 2021. It’s the first year-over-year decline since 2013, though over the last decade AI investment has still increased thirteenfold. Despite last year’s dip, the proportion of companies adopting AI has more than doubled since 2017, and AI-related job postings increased in every U.S. sector between 2021 and 2022.
Some researchers have concerns, but more see good things: The report cites a survey in which 36% of natural language processing (a branch of AI) researchers said AI decisions could one day cause a “nuclear-level catastrophe,” while roughly twice as many (73%) said AI could soon lead to “revolutionary societal change.”
Lawmakers are beginning to look at AI: Across 127 countries included in the report’s legislative analysis, the number of bills passed into law containing the term “artificial intelligence” increased from one in 2016 to 37 in 2022.
Goldman Sachs Report:
Goldman Sachs also released a new report on the economic impact of generative AI (e.g., ChatGPT and DALL-E), particularly its effect on labor markets. Below are some highlights.
Jobs will be affected: The report estimates around two-thirds of current jobs are exposed to some degree of AI automation, while around a quarter of all current work could be outright replaced. In all, the report estimates the equivalent of some 300 million full-time jobs could be exposed to automation.
New jobs will be created: The report predicts new occupations will emerge as AI is adopted, offsetting some of the expected worker displacement. Goldman also thinks the combination of cost savings, new job creation, and increased productivity (more below) could lead to substantial economic growth.
Productivity will go up, boosting global GDP: The report estimates annual U.S. labor productivity growth could rise by roughly 1.5 percentage points in the decade following widespread adoption of generative AI. That productivity boost could eventually raise global gross domestic product by 7%.
Why it matters:
Taken together, the reports paint a fairly predictable picture: Businesses are loving AI, they’ll probably use it to replace some jobs, and it’ll likely boost productivity and increase overall economic growth. Other studies also show how beneficial artificial intelligence could be for society, with applications ranging from medicine to national security.
At the same time, coming up with a list of ways AI tools can be used for nefarious purposes is unsurprisingly easy, and thinking of the deepfake videos we may have to fight through one day should send a shiver down any sane person’s spine.
Sam Altman, CEO of OpenAI, the company that created ChatGPT, told ABC News earlier this month that while he believes AI will reshape society for the better, he has concerns.
“We’ve got to be cautious here,” Altman said. “I’m particularly worried that these models could be used for large-scale disinformation. I’m worried that these models, now that they’re getting better at writing computer code, could be used for offensive cyberattacks.”
So, while we should keep treading ahead, we should probably tread lightly.