Have you ever pondered who should be responsible for making decisions about artificial intelligence? Eric Schmidt, the former CEO of Google, believes that this responsibility shouldn’t rest solely with tech experts.
In a recent discussion with ABC, Schmidt voiced his apprehensions about the swift progress in AI technologies. He cautioned that AI might evolve to a level beyond human comprehension, which could pose serious threats to society.
Alongside other technology specialists, Schmidt underscored the need for safeguards to keep AI from gaining excessive autonomy. He went so far as to suggest there may come a point where we need to “pull the plug” on AI to avert potential dangers.
But who should hold the authority to make such pivotal decisions? Schmidt argues that it shouldn’t just be technologists like him making the call. He advocates for the inclusion of a varied group of stakeholders to help create frameworks for the development and application of AI.
Interestingly, Schmidt also floated the idea of using AI itself to oversee AI technology. He proposed that while humans might struggle to manage AI effectively, AI systems could potentially monitor and regulate their own progress.
Though Schmidt’s viewpoint may appear unconventional, it prompts crucial discussions about the future of AI and the significance of human oversight in its progression. As technology advances at an extraordinary rate, it’s essential to contemplate how we can ensure that AI aligns with humanity’s best interests.