Eric Schmidt Awards A.I. Researchers $10M to Study Safety

Eric Schmidt’s New Initiative in A.I. Safety

Artificial Intelligence (A.I.) is familiar territory for Eric Schmidt, the former CEO of Google. Over the years, he has invested in a variety of A.I. startups, including Stability AI, Inflection AI, and Mistral AI. Now, Schmidt is shifting gears, unveiling a $10 million initiative focused on tackling the safety concerns that come with this revolutionary technology.

This funding will help create an A.I. safety science program under Schmidt Sciences, a nonprofit organization established by Schmidt and his wife, Wendy. Michael Belinsky will spearhead the program, aiming to emphasize scientific inquiry into A.I. safety rather than merely highlighting the potential dangers. “Our goal is to pursue academic research that seeks to understand why certain elements are inherently unsafe,” Belinsky noted.

As part of this initiative, more than two dozen researchers have already been selected to receive grants of up to $500,000. Alongside financial backing, these scholars will gain access to essential computational resources and A.I. models. The program is designed to adapt alongside the fast-paced developments in the A.I. sector. “Our focus is on addressing the challenges presented by contemporary A.I. systems, not outdated iterations like GPT-2,” Belinsky pointed out.

Prominent figures in the research community, such as Yoshua Bengio and Zico Kolter, are among the first recipients of this funding. Bengio aims to create technologies for mitigating risks in A.I. systems, while Kolter intends to investigate issues like adversarial transfer. Another grantee, Daniel Kang, is keen on examining whether A.I. agents could potentially launch cybersecurity attacks, emphasizing the inherent risks tied to A.I. capabilities.

While enthusiasm for A.I. continues to thrive in Silicon Valley, there are growing worries that safety is taking a backseat. The new initiative by Schmidt Sciences aims to counteract these concerns by removing obstacles that impede A.I. safety research. By encouraging collaboration between academia and industry, the program gives researchers like Kang hope that leading A.I. companies will incorporate safety findings into their development processes.

As the landscape of A.I. evolves, Kang stresses the significance of maintaining open dialogue and transparent reporting in the evaluation of A.I. models. He highlights the necessity for responsible practices among major labs to ensure that the development of A.I. technology remains safe and ethical.

In summary, Eric Schmidt’s $10 million pledge towards A.I. safety highlights the critical need to prioritize research and innovation in addressing the challenges and risks posed by this transformative technology.