For nine years, the Festival of Dangerous Ideas has been making Sydney squirm with a program that slaps you in the academic face, challenging social conventions, popular thought and normalised views about everything from the dark arts to the dark web. This year’s line-up has a hefty proportion of talks exploring the latter, or more broadly, the intersection of human communication and experience with machines.
As study prep for the festival, Time Out convened a mini-panel of FODI speakers to talk about the dangers and opportunities presented by artificial intelligence (AI) and the internet of things. We learnt three confronting lessons.
Lesson 1: The Cylons won’t take over the world
While Toby Walsh predicts we’ll see robots as smart as humans by 2062, he’s not worried about a Cylon invasion.
“We’ve been rather misled by this idea that the robots are going to take over. The robots have no desires of their own, they do exactly what we tell them to,” he says.
“I’m much more worried about incompetence than malevolence – that we’ll get the machines to do something, and we haven’t thought carefully about how it’s going to interact with our complex world.”
And given he’s a professor of artificial intelligence at UNSW, we’re probably safe to take him at his word. At FODI, he’ll deliver a keynote speech about how we should prepare for a future where AI is an integral part of society.
“Healthcare, transport, how we manufacture things, how we educate ourselves, how we go out and play – it’s going to touch almost every aspect of our lives.”
Walsh recognises the current issues that have come out of technological development, but maintains that measured progress is the only way forward.
“The only hope we have to deal with all these wicked problems like climate change, increasing inequality and the ongoing refugee problem, is if we embrace technology and use the world’s resources in a better, more sustainable way.
“The future is a product of the decisions we make today. Society shapes technology and technology can shape society.”
Lesson 2: There’s nobody drawing the line for sex robots
Artificial intelligence has already made its way into the bedroom. Lifelike, human-sized dolls can now be purchased to fulfil sexual desires and provide physical gratification. While ‘sexbots’ can be used responsibly, Xanthe Mallett, senior lecturer in criminology at the University of Newcastle, says their popularisation comes with a host of problems, one being the potential to cultivate derogatory and harmful sexual expectations.
“A robot doesn’t have any ability to say no, there are no boundaries, so are we potentially normalising abuse? It’s a very dangerous precedent.”
Mallett says a particularly worrying trend is the use of sex robots that resemble children, known as ‘paedobots’, which are being actively marketed as a way to limit paedophilic tendencies.
“It’s been suggested by the manufacturer that it’s got some kind of clinical application in terms of reducing child sex offences, but there’s actually no evidence of that.”
Instead, Mallett suggests it could have the opposite effect, normalising the physical assault of a child.
While Mallett has worked within the criminal system on such offences, this isn’t the only discussion she and her fellow panellists will have during their talk at the festival.
“‘Sexbots’ is kind of the headliner, but it's actually about something much bigger. It’s about interaction between humans and where AI comes into that.
“We’re seeing more breakdowns in communication between people, and I think this is another step in that line, and we’ll need to consider what the implications are before this becomes more mainstream – [for example] are you cheating on your partner if you’ve got a sexbot?”
Lesson 3: Big Brother may be keeping us honest
It seems that in a world where we talk less, we’re communicating more than ever. The internet has revolutionised our concept of socialising, learning and information sharing, but it has also revealed how dishonest we are to those around us.
Seth Stephens-Davidowitz – former Google data scientist and author of Everybody Lies, a 2017 book investigating big data – says he trusts the internet more than his neighbours.
“People are more honest to Google than they are to other people.
“If you ask people if they’re racist, very few say yes, but many people search for racist jokes. If you ask people in a place where it’s hard to be gay whether they’re gay, they might say no, but they might search for and watch gay porn.”
But it’s a different story on our social feeds.
“In social media we’re less honest, because we’re cultivating an image. If you look at how people describe their husbands on social media they say ‘my husband is the best, my husband is my best friend, my husband is so cute’. But when they’re describing their husbands on Google when looking for information, it’s ‘my husband is a jerk, my husband is annoying’.”
It’s not all lies and broken hearts, though. Stephens-Davidowitz says while big data reflects some unsavoury truths and can be used to take advantage of consumers, it can also be a force for good.
“People frequently don’t admit their mental health problems because there’s a lot of stigma, but on Google they search [for information] about their anxiety or depression, or even suicidal thoughts, and we can potentially use this information to better treat people.”
And it’s these kinds of positive technological advancements, along with darker, more worrying trends, that more than 30 FODI talks, panel discussions and artworks will explore this weekend. Check out the website for more details or to buy a weekend pass.