AI in HVAC&R – Part 2: Risks

We’ve looked at the ethics around using artificial intelligence (AI) in HVAC&R, but what about the risks involved?

Words by Arsen Ilhan

The National Institute of Standards and Technology (NIST) identifies several types of harm that AI can cause. Negative impact or harm can be experienced by individuals, groups, communities, organisations, society, the environment and the planet. These harms can range from privacy violations and biased decision-making to economic disruptions.

Understanding and mitigating these risks is crucial to ensure the responsible development and deployment of AI technologies. By proactively addressing these potential harms, organisations and policymakers can foster a more equitable and sustainable integration of AI into various aspects of society. This includes implementing robust ethical guidelines, enhancing transparency in AI systems, and promoting inclusive practices that consider the diverse needs and values of all stakeholders. Additionally, continuous monitoring and evaluation of AI impacts are necessary to adapt to emerging challenges and ensure long-term positive outcomes for humanity and the environment.

The impact on individuals

AI-driven HVAC&R systems have the potential to impact individuals and groups by perpetuating biases that lead to unequal treatment. For example, if an AI system prioritises energy efficiency over occupant comfort based on biased data, certain individuals or groups, such as those in less frequently occupied areas or with non-standard schedules, may experience discomfort. To address this, HVAC&R systems must be designed to consider the diverse needs of all occupants.

This can be achieved by using inclusive training data and incorporating fairness metrics to ensure equitable temperature control and air quality across different demographic groups. According to Luísa Nazareno and Daniel S. Schiff’s study, The impact of automation and artificial intelligence on worker well-being, automation can affect worker well-being through five hypothetical channels: worker freedom, sense of meaning, cognitive load, external monitoring, and insecurity.
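
To make the idea of a fairness metric concrete, the sketch below (in Python, with purely illustrative group labels, comfort scores and threshold, not drawn from any particular product) checks whether a candidate control policy leaves one occupant group markedly less comfortable than another before it is deployed.

# Minimal sketch: checking an HVAC control policy for comfort disparity
# across occupant groups before optimising purely for energy.
# Group labels, scores and the threshold are illustrative assumptions.

from statistics import mean

# Predicted comfort scores (0 = very uncomfortable, 1 = very comfortable),
# tagged by a hypothetical schedule-based grouping.
predicted_comfort = {
    "standard_hours": [0.82, 0.88, 0.85, 0.90],
    "non_standard_hours": [0.55, 0.61, 0.58],  # e.g. night-shift or weekend occupants
}

MAX_ALLOWED_GAP = 0.15  # illustrative fairness threshold

def comfort_gap(groups: dict[str, list[float]]) -> float:
    """Largest difference in mean comfort between any two groups."""
    means = [mean(scores) for scores in groups.values()]
    return max(means) - min(means)

gap = comfort_gap(predicted_comfort)
print(f"Comfort gap between groups: {gap:.2f}")

if gap > MAX_ALLOWED_GAP:
    print("Policy fails the fairness check: re-weight or retrain before deployment.")
else:
    print("Policy passes the fairness check.")

In practice the groups, scores and threshold would come from the building’s own occupancy data and comfort standards rather than the assumed figures above.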

Organisational impact

Within organisations, the integration of biased AI in HVAC&R systems can lead to a range of adverse outcomes that extend beyond energy and environmental conditions. AI algorithms may increase the risk of data breaches and cybersecurity threats. Biased algorithms might inadvertently compromise sensitive data if not properly secured, leading to breaches that can harm both the organisation and its stakeholders. Privacy concerns also arise when AI systems use personal data without consent or fail to anonymise information adequately, potentially violating data protection regulations and eroding consumer trust. Operationally, biased AI may introduce reliability issues and dependency on flawed decision-making processes, affecting the system’s performance and user satisfaction.

These operational risks can lead to increased maintenance costs and downtime, impacting overall business continuity. Furthermore, ensuring regulatory compliance becomes challenging when AI systems produce biased outcomes that violate fairness and non-discrimination laws, potentially exposing organisations to legal liabilities and reputational damage. Financially, the costs associated with addressing these issues, including legal fees, fines, and reputation management, can be substantial, undermining the financial stability of the organisation.

Risks to the environment

The environmental impact of AI-driven HVAC&R systems is another critical concern. While AI can optimise energy use and reduce greenhouse gas emissions, biased algorithms might overlook the specific environmental needs of certain areas. For example, an AI system trained primarily on urban data might not adequately address the unique challenges of rural or ecologically sensitive areas. Ensuring that AI systems are environmentally responsible involves training algorithms on diverse environmental data and continuously monitoring their impact on different ecosystems. Additionally, incorporating sustainability goals into the design of AI-driven HVAC&R systems can help balance energy efficiency with environmental stewardship.
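
As a minimal sketch of how sustainability goals can be built into the objective itself (all strategy names, figures and weights here are assumptions for illustration), the Python snippet below weights local environmental impact alongside energy and emissions, so an ecologically sensitive site can arrive at a different strategy from an urban one instead of inheriting a single energy-only target tuned on urban data.

# Minimal sketch: a site-specific weighted objective for choosing an
# HVAC control strategy. Strategies, figures and weights are assumptions.

def weighted_cost(energy_kwh: float, emissions_kg: float, local_impact: float,
                  w_energy: float, w_emissions: float, w_local: float) -> float:
    """Lower is better; w_local lets ecologically sensitive sites count more."""
    return w_energy * energy_kwh + w_emissions * emissions_kg + w_local * local_impact

# Candidate strategies with illustrative per-day outcomes.
candidates = {
    "economiser_heavy": {"energy_kwh": 95.0, "emissions_kg": 48.0, "local_impact": 0.5},  # e.g. higher cooling-tower water draw
    "mechanical_only": {"energy_kwh": 120.0, "emissions_kg": 60.0, "local_impact": 0.2},
}

# Urban site: energy dominates. Sensitive site: local impact weighted heavily.
profiles = {
    "urban": {"w_energy": 1.0, "w_emissions": 0.5, "w_local": 1.0},
    "sensitive": {"w_energy": 1.0, "w_emissions": 0.5, "w_local": 200.0},
}

for site, weights in profiles.items():
    best = min(candidates, key=lambda c: weighted_cost(**candidates[c], **weights))
    print(f"{site}: preferred strategy {best}")

The point is simply that the weighting is site-specific rather than carried over wholesale from data gathered elsewhere.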

In conclusion, the integration of AI into building services represents a transformative leap towards efficiency, sustainability, and enhanced user experience. AI-driven technologies promise to optimise energy consumption, improve maintenance practices and tailor building environments to meet individual needs more effectively than ever before. However, this advancement must be approached with careful consideration of ethical implications to ensure these benefits are equitably distributed and sustainable over the long term. Addressing biases, ensuring transparency, safeguarding data privacy, and prioritising fairness are paramount to fostering trust and inclusivity in AI-driven technologies.

By embracing ethical AI practices and continuously monitoring their impacts, stakeholders can navigate potential risks effectively and harness the full potential of AI to create environments that are not only technologically advanced but also equitable and environmentally responsible.

Transparency throughout the AI lifecycle – from data collection to algorithm deployment – is essential for building trust. As we navigate the future of AI in building services, a proactive commitment to ethical standards will be instrumental in shaping a future where AI enhances our lives while respecting fundamental human values and societal wellbeing. This approach not only safeguards against potential risks but also maximises the positive impact of AI on our built environment.

Arsen is a mechanical engineer and currently holds the position of marketing coordinator on the Women in Engineering committee. She actively advocates for increased representation of women in the industry, inspiring and encouraging them to pursue rewarding careers in engineering through Women in Engineering and Women of AIRAH. She has also been a mentor at the Royal Academy of Engineering (UK). As an AI enthusiast, she explores machine learning and its practical applications. She was recognised as a finalist for Young Engineer of the Year by CIBSE in 2023.