- Artificial intelligence refers to the simulation of human intelligence in machines.
- The goals of artificial intelligence include learning, reasoning, and perception.
Today's artificial intelligence is known as narrow AI (or weak AI) because it is designed to perform a single narrow task, such as facial recognition, internet search, or driving a car. The long-term goal of many researchers, however, is to create general AI (AGI, or strong AI). Narrow AI may outperform humans at its specific task, whether playing chess or solving equations, but AGI would outperform humans at nearly every cognitive task.
AI technologies are sure to continue disrupting our world, from automating routine office tasks to solving urgent challenges like climate change and hunger. But as incidents such as wrongful arrests in the U.S. and the mass surveillance of China’s Uighur population demonstrate, we are also already seeing some negative impacts stemming from AI. Focused on pushing the boundaries of what’s possible, companies, governments, AI practitioners, and data scientists sometimes fail to see how their breakthroughs could cause social problems until it’s too late.
Deepfakes can sow doubt and discord
Deepfakes are realistic-looking artificial images, audio, and videos, typically created using machine learning methods. The technology to produce such “synthetic” media is advancing at breakneck speed, and sophisticated tools are now freely and readily accessible, even to non-experts. Malicious actors already deploy such content to ruin reputations and commit fraud, and it’s not difficult to imagine other injurious use cases.
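To make those “machine learning methods” concrete, here is a minimal, illustrative PyTorch sketch of the shared-encoder, dual-decoder autoencoder idea behind classic face-swap deepfakes. Everything in it is a simplifying assumption: the network sizes are toy-scale, and random tensors stand in for aligned face crops. Production systems add face detection and alignment, adversarial losses, and far larger models.

```python
# Sketch of the shared-encoder / dual-decoder autoencoder behind classic
# face-swap deepfakes. Toy-scale and illustrative only: random tensors
# stand in for 64x64 face crops of two people, A and B.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # shared latent face code
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

faces_a = torch.rand(8, 3, 64, 64)  # stand-in batch of person A's face crops
faces_b = torch.rand(8, 3, 64, 64)  # stand-in batch of person B's face crops

for step in range(100):
    # Each decoder learns to reconstruct its own identity from the shared code.
    loss = (nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)
            + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode a face of A, then decode it with B's decoder,
# producing B's face with A's pose and expression.
swapped = decoder_b(encoder(faces_a))
```

The design point is that the single encoder learns identity-independent features (pose, expression, lighting), while each decoder learns one person's appearance, which is what makes the swap possible at all.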
Deepfakes create a twofold danger: that the fake content will fool viewers into believing fabricated statements or events are real, and that their rising prevalence will undermine the public’s confidence in trusted sources of information.
Large language models as disinformation force multipliers
Large language models are another example of AI technology developed with benign intentions that still merits careful consideration from a social impact perspective. These models learn to write humanlike text using deep learning techniques, trained on patterns in massive text datasets often scraped from the internet. Leading AI research company OpenAI’s latest model, GPT-3, boasts 175 billion parameters, more than 100 times as many as its predecessor, GPT-2. This scale allows GPT-3 to generate almost any kind of text with minimal human input, including short stories, email replies, and technical documents.
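GPT-3 itself is accessible only through OpenAI’s API, but its openly released predecessor, GPT-2, illustrates the same minimal-input generation workflow. The sketch below uses the Hugging Face transformers library; the prompt and sampling settings are arbitrary choices for illustration, not a recipe from the source.

```python
# Minimal text generation with GPT-2 via Hugging Face transformers.
# Illustrates how little human input a large language model needs to
# produce fluent, plausible-sounding text.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Breaking news: scientists announced today that"
outputs = generator(
    prompt,
    max_new_tokens=40,       # length of each continuation
    num_return_sequences=3,  # three independent samples
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.9,         # mild randomness for varied outputs
)

for i, out in enumerate(outputs, 1):
    print(f"--- sample {i} ---")
    print(out["generated_text"])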
The path to ethical, socially beneficial AI
AI may never reach the nightmare sci-fi scenarios of Skynet or the Terminator, but that doesn’t mean we can shy away from the real social risks today’s AI poses. By working with stakeholder groups, researchers and industry leaders can establish procedures for identifying and mitigating potential risks without overly hampering innovation. After all, AI itself is neither inherently good nor bad. There are many real potential benefits it can unlock for society; we just need to be thoughtful and responsible in how we develop and deploy it.