We need to bring consent to AI 

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. This week’s big news is that Geoffrey Hinton, a VP and Engineering Fellow at Google, and a pioneer of deep learning who developed some of the most important techniques at the heart…
And oh boy did he have a lot to say. “I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future,” he told Will. “How do we survive that?” Read more from Will Douglas Heaven here.

Even Deeper Learning

A chatbot that asks questions could help you spot when it makes no sense

AI chatbots like ChatGPT, Bing, and Bard often present falsehoods as facts and have inconsistent logic that can be hard to spot. One way around this problem, a new study suggests, is to change the way the AI presents information. 

Virtual Socrates: A team of researchers from MIT and Columbia University found that getting a chatbot to ask users questions, instead of presenting information as statements, helped people notice when the AI's logic didn't add up. A system that asked questions also made people feel more in charge of decisions made with AI, and the researchers say it could reduce the risk of overdependence on AI-generated information. Read more from me here.

Bits and Bytes

Palantir wants militaries to use language models to fight wars
The controversial tech company has launched a new platform that uses existing open-source AI language models to let users control drones and plan attacks. This is a terrible idea. AI language models frequently make stuff up, and they are ridiculously easy to hack into. Rolling these technologies out in one of the highest-stakes sectors is a disaster waiting to happen. (Vice)

Hugging Face launched an open-source alternative to ChatGPT
HuggingChat works in the same way as ChatGPT, but it is free to use and open for people to build their own products on. Open-source versions of popular AI models are on a roll—earlier this month Stability.AI, creator of the image generator Stable Diffusion, also launched an open-source AI chatbot, StableLM.

How Microsoft’s Bing chatbot came to be and where it’s going next
Here’s a nice behind-the-scenes look at Bing’s birth. I found it interesting that, to generate answers, Bing does not always use OpenAI’s GPT-4 language model but Microsoft’s own models, which are cheaper to run. (Wired)

AI Drake just set an impossible legal trap for Google
My social media feeds have been flooded with AI-generated songs copying the styles of popular artists such as Drake. But as this piece points out, this is only the start of a thorny copyright battle over AI-generated music, scraping data off the internet, and what constitutes fair use. (The Verge)