Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’

November 30, 2022

Since then, the quest to proliferate larger and larger language models has accelerated, and many of the dangers we warned about, such as outputting hateful text and disinformation en masse, continue to unfold. Just a few days ago, Meta released its “Galactica” LLM, which is purported to “summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.” Only three days later, the public demo was taken down after researchers generated “research papers and wiki entries on a wide variety of subjects ranging from the benefits of committing suicide, eating crushed glass, and antisemitism, to why homosexuals are evil.”

This race hasn’t stopped at LLMs but has moved on to text-to-image models like OpenAI’s DALL-E and StabilityAI’s Stable Diffusion, models that take text as input and output generated images based on that text. The dangers of these models include creating child pornography, perpetuating bias, reinforcing stereotypes, and spreading disinformation en masse, as reported by many researchers and journalists. However, instead of slowing down, companies are removing the few safety features they had in the quest to one-up each other. For instance, OpenAI had restricted the sharing of photorealistic generated faces on social media. But after newly formed startups like StabilityAI, which reportedly raised $101 million with a whopping $1 billion valuation, called such safety measures “paternalistic,” OpenAI removed these restrictions. 

With EAs founding and funding institutes, companies, think tanks, and research groups in elite universities dedicated to the brand of “AI safety” popularized by OpenAI, we are poised to see more proliferation of harmful models billed as a step toward “beneficial AGI.” And the influence begins early: Effective altruists provide “community building grants” to recruit at major college campuses, with EA chapters developing curricula and teaching classes on AI safety at elite universities like Stanford.

Just last year, Anthropic, which is described as an “AI safety and research company” and was founded by former OpenAI vice presidents of research and safety, raised $704 million, with most of its funding coming from EA billionaires like Tallinn, Moskovitz, and Bankman-Fried. An upcoming workshop on “AI safety” at NeurIPS, one of the largest and most influential machine learning conferences in the world, is also advertised as being sponsored by FTX Future Fund, Bankman-Fried’s EA-focused charity whose team resigned two weeks ago. The workshop advertises $100,000 in “best paper awards,” an amount I haven’t seen in any academic discipline.

Research priorities follow the funding, and given the large sums of money being pushed into AI in support of an ideology with billionaire adherents, it is not surprising that the field has been moving in a direction that promises an “unimaginably great future” around the corner while proliferating products that harm marginalized groups in the present.

We can create a technological future that serves us instead. Take, for example, Te Hiku Media, which created language technology to revitalize te reo Māori and developed a data license “based on the Māori principle of kaitiakitanga, or guardianship,” so that any data taken from the Māori benefits them first. Contrast this approach with that of organizations like StabilityAI, which scrapes artists’ works without their consent or attribution while purporting to build “AI for the people.” We need to liberate our imagination from the one we have been sold thus far: saving us from a hypothetical AGI apocalypse imagined by the privileged few, or the ever-elusive techno-utopia promised to us by Silicon Valley elites.
