What are some of the key challenges associated with AI safety?
While the opportunity to increase safety and efficiency with AI tools is tremendous, it is crucial for all stakeholders to consider AI’s limitations when designing solutions. Software engineers, who are at the forefront of AI development, must grapple with ethical and security considerations and communicate AI’s limitations to stakeholders and clients. In turn, those adopting AI tools must innovate responsibly to deliver meaningful benefits across the population.
There is a risk that the data sets used to train AI models are geographically or socially biased. AI adoption must also be equitable: it should not displace jobs, concentrate wealth or deliver benefits unevenly across society.
Adoption of AI systems and services can introduce new vulnerabilities, both in terms of data privacy and exposure to cyberattacks.
Growth in AI, and in the data centres required to store and process its data, is placing increasing demands on critical raw materials, fresh water and energy, while contributing to global carbon emissions.
What key roles do the industry, government and professional engineers play in harnessing AI for a safer world?
To harness the full potential of AI and ensure everyone benefits from the opportunities it brings, it must be developed and deployed safely and ethically, both now and in the future. This requires collaboration between professional engineers, government and industry. Each stakeholder group has a vital role to play in ensuring that AI is developed responsibly on representative data sets, subjected to active risk management and robust testing, and applied to the right problems with appropriate monitoring and evaluation.
This spotlight sets out the specific skills, capabilities and responsibilities required of each group, emphasising the importance of their collective efforts in shaping the responsible use of AI. Together, these stakeholders can pave the way for a future in which AI-driven practices are effective, safe and ethically sound.