Apple is to boost its Siri voice assistant and operating systems with OpenAI’s ChatGPT as it seeks to catch up in the AI race.
The iPhone maker announced the Siri makeover along with several other new features at its annual developer conference.
The move has magnified, once again, the growing role generative AI is set to play in all our lives.
Now, one of the world’s leading experts in this area has issued his advice on how to best utilise its benefits.
Henry Ajder, presenter of the BBC’s The Future Will Be Synthesised, is one of the world’s leading AI speakers and travels the globe talking about the subject.
He said: “I really feel we need to make sure we resist that knee-jerk reaction to use it for the sake of using it, and build a comprehensive generative AI strategy across your business.
“You need to make sure your data is being used effectively, that your customers are getting value from potential applications, and that your employees are using it in a way which actually benefits them and makes them more productive and isn’t just a novelty – a solution looking for a problem.”
His top tips are:
Resist the rush to implement AI: “Interrogate your organisational structures and resist that knee-jerk reaction to just put something in place for the sake of putting it in place. At the moment, we’re seeing this rush of excitement and hype around generative AI and synthetic content, and this has really led in some respects to a bit of a wild west dynamic in how people are using these tools. Are they using them ethically? Are they being used in a way where consumers are being protected, their customers are being protected? For me, as someone who’s been working on responsible AI, and responsible generative AI in particular, for about six years now, I’m used to working with organisations, helping them understand how they can do this in the right way, and how they can make sure that both reputationally and legally they’re protected for the long term.”
Understand the legal implications: “When it comes to incorporating responsible AI in your business, it’s important to look at the end-to-end pipeline of how that’s going to impact your data, your employees, your customers, and indeed your compliance with certain legal frameworks. This is such a fast-moving space, it can be hard to understand what the legislative landscape might look like in two months’ time, or what consumer attitudes might look like in two months’ time.”
Consider the customer experience: “With generative AI – and it’s something that a lot of businesses and organisations, I think, are trying to understand – you must consider what authenticity means in the age of generative media and AI-generated content. And indeed, for me, authenticity is no longer opposed to synthetic. We’re seeing experiences like these opening up to create much more personalised content, helping people engage with that content in a way that they weren’t able to before, or people interacting with, for example, virtual influencers and building these kinds of slightly strange but interesting relationships with those characters, or indeed living in virtual worlds, as we’re seeing with certain extended reality and virtual reality applications. Generative AI will be the engine for a lot of those applications moving forwards.”
Re-assess cybersecurity: “When it comes to cybersecurity, we’re so used to trusting audiovisual media as something that can’t be faked or manipulated. But we need to radically reassess the landscape of cyber threats in light of the advance of generative AI and deepfakes, to make sure that we understand what it is we’re up against now. And this is not theoretical. This is something that’s happening right now as we speak. Generative AI and deepfakes are a huge new frontier for cybersecurity, and one of the most challenging ones for biometrics, and indeed for the kind of cybersecurity and business security procedures which now might be outdated based on these new developments.”
Keep updated on legislation: “We see this geopolitical dimension of the AI arms race between the big powers emerging, where everyone is rushing to build the most powerful systems they can. The harms are indisputable, and they’re something that we should be really worried about. At the same time, we also want to be careful that we aren’t completely squashing innovation, and that we’re not just allowing other organisations or other businesses in other countries to get ahead. So the regulation question right now is occupying much of the brain space of governments around the world, as they try to understand how we can make AI work for us without it also potentially biting us on the other hand.”
For more information visit: https://ai-speakers-agency.com/