
AI Is Fundamentally a “Labor Replacing Tool”

Welcome to AI This Week, Gizmodo’s weekly deep dive on what’s been happening in artificial intelligence.

For months, I’ve been harping on a particular point, which is that artificial intelligence tools—as they’re currently being deployed—are mostly good at one thing: Replacing human employees. The “AI revolution” has mostly been a corporate one, an insurrection against the rank-and-file that leverages new technologies to reduce a company’s overall headcount. The biggest sellers of AI have been very open about this—admitting time and again that new forms of automation will allow human jobs to be repurposed as software.

We got another dose of that this week, when a co-founder of Google’s DeepMind, Mustafa Suleyman, sat down for an interview with CNBC. Suleyman was in Davos, Switzerland, for the World Economic Forum’s annual get-together, where AI was reportedly the most popular topic of conversation. During his interview, Suleyman was asked by news anchor Rebecca Quick whether AI was “going to replace humans in the workplace in massive amounts.”

The tech CEO’s answer was this: “I think in the long term—over many decades—we have to think very hard about how we integrate these tools because, left completely to the market…these are fundamentally labor replacing tools.”

And there it is. Suleyman makes this sound like some foggy future hypothetical but it’s obvious that said “labor replacement” is already happening. The tech and media industries—which are uniquely exposed to the threat of AI-related job losses—saw huge layoffs last year, right as AI was “coming online.” In only the first few weeks of January, well-established companies like Google, Amazon, YouTube, Salesforce, and others have announced more aggressive layoffs that have been explicitly linked to greater AI deployment.

The general consensus in corporate America seems to be that companies should use AI to operate with leaner teams, bolstered by small groups of AI-savvy professionals. These AI professionals will become an increasingly sought-after class of worker, as they’ll offer the opportunity to reorganize corporate structures around automation, thus making them more “efficient.”

For companies, the benefits of this are obvious. You don’t have to pay a software program, nor do you have to supply it with health benefits. It won’t get pregnant and have to take six months off to care for its newborn child, nor will it ever become disgruntled with its working conditions and try to start a union drive in the break room.

The billionaires who are marketing this technology have made vague rhetorical gestures to things like universal basic income as a cure for the inevitable worker displacements that are going to happen, but only a fool would think those are anything other than empty promises designed to stave off some sort of underclass uprising. The truth is that AI is a technology that was made by and for the managers of the world. The frenzy in Davos this week—where the world’s wealthiest fawned over it like Greek peasants discovering Promethean fire—is only the latest reminder of that.


Question of the day: What’s OpenAI’s excuse for becoming a defense contractor?

The short answer to that question is: Not a very good one. This week, it was revealed that the influential AI organization was working with the Pentagon to develop new cybersecurity tools. OpenAI had previously promised not to join the defense industry. Now, after a quick edit to its terms of service, the billion-dollar company is charging full steam ahead with the development of new toys for the world’s most powerful military. After being confronted about this pretty drastic pivot, the company’s response was basically: ¯\_(ツ)_/¯ …“Because we previously had what was essentially a blanket prohibition on military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world,” a company spokesperson told Bloomberg. I’m not sure what the hell that means but it doesn’t sound particularly convincing. Of course, OpenAI is not alone. Many companies are currently rushing to market their AI services to the defense community. It only makes sense that a technology that has been referred to as the “most revolutionary technology” seen in decades would inevitably get sucked up into America’s military industrial complex. Given what other countries are already doing with AI, I’d imagine this is only the beginning.

More headlines this week

  • The FDA has approved a new AI-fueled device that helps doctors hunt for signs of skin cancer. The Food and Drug Administration has given its approval to something called a DermaSensor, a unique hand-held device that doctors can use to scan patients for signs of skin cancer; the device leverages AI to conduct “rapid assessments” of skin lesions and determine whether they look healthy or not. While there are a lot of dumb uses for AI floating around out there, experts contend that AI could actually prove quite useful in the medical field.
  • OpenAI is establishing ties to higher education. OpenAI has been trying to reach its tentacles into every stratum of society, and the latest sector to be breached is higher education. This week, the organization announced that it had forged a partnership with Arizona State University. As part of the partnership, ASU will get full access to ChatGPT Enterprise, the company’s business-level version of the chatbot. ASU also plans to build a “personalized AI tutor” that students can use to assist them with their schoolwork. The university is also planning a “prompt engineering course” which, I am guessing, will help students learn how to ask a chatbot a question. Useful stuff!
  • The internet is already infested with AI-generated crap. A new report from 404 Media shows that Google is algorithmically boosting AI-generated content from a host of shady websites. Those websites, the report shows, are designed to hoover up content from other, legitimate websites and then repackage it using algorithms. The whole scheme revolves around automating content output to generate advertising revenue. This regurgitated crap is then getting promoted by Google’s News algorithm to appear in search results. Joseph Cox writes that the “presence of AI-generated content on Google News signals” how “Google may not be ready for moderating its News service in the age of consumer-access AI.”
