Nightingale HQ Blog


    Ethical AI monitoring in the post-COVID workplace

    Published on 14-Dec-2020 09:45:00


    With the excellent news of the first COVID-19 vaccines being administered in the UK, things could be back to normal by next winter. In the meantime, we must remain vigilant and take precautions such as monitoring temperatures, social distancing, and other health and safety measures, to keep the impact of the virus to a minimum. AI can help to enforce these safety precautions, but ethical conduct and compliance with data privacy regulations remain imperative.

    Let’s take a look at some of the use cases and necessary considerations.

    AI monitoring 

    With over 60,000 COVID-19-related deaths in the UK, protecting those who must continue to go to work is paramount. Following checklists alone is not enough, but we can rely on tech to help us achieve a safer workplace. Using IoT and edge computing combined with CCTV, we can create systems that track workers and raise alerts when issues arise. This approach is commonly used in warehouses to check for safety risks on the shop floor, e.g. to detect whether someone is wearing a helmet in required areas, but it could also be applied to:

    • Social distancing
    • PPE checks
    • Temperature checks
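
    As a rough sketch of how such a system might combine per-frame detections with simple rules, consider the following. Everything here is illustrative and hypothetical — the `Detection` record, the thresholds, and the check names are assumptions, not any particular product's API; in practice the detections would come from an upstream vision model running at the edge.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical detection record produced by an upstream CCTV/edge vision model.
@dataclass
class Detection:
    person_id: str
    x: float  # position on the shop floor, in metres
    y: float
    wearing_helmet: bool
    temperature_c: Optional[float]  # from a thermal camera, if available

MIN_DISTANCE_M = 2.0       # assumed social-distancing threshold
FEVER_THRESHOLD_C = 37.8   # assumed temperature-check threshold

def check_frame(detections: List[Detection], helmet_zone: bool) -> List[str]:
    """Return human-readable alerts for one frame of detections."""
    alerts = []
    # PPE check: helmets required in designated zones.
    if helmet_zone:
        for d in detections:
            if not d.wearing_helmet:
                alerts.append(f"{d.person_id}: no helmet in helmet zone")
    # Temperature check.
    for d in detections:
        if d.temperature_c is not None and d.temperature_c >= FEVER_THRESHOLD_C:
            alerts.append(f"{d.person_id}: elevated temperature")
    # Social distancing: flag any pair closer than the threshold.
    for i, a in enumerate(detections):
        for b in detections[i + 1:]:
            if ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5 < MIN_DISTANCE_M:
                alerts.append(f"{a.person_id} and {b.person_id}: too close")
    return alerts
```

    Note that every alert here is keyed to an identifiable person — which is exactly why the data protection questions in the next section arise.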

    Storing sensitive data

    Unfortunately, collecting such sensitive data brings a number of compliance and ethical problems. Data collected with the intention of keeping people safe from COVID-19 may be considered medical or biometric data, which, if breached, could put your employees at risk.

    In some cases, authorities can gain access to track and trace data to trigger prosecutions. There are currently 138 pieces of UK legislation that give various government investigators the right to obtain warrants for electronic data and devices. Therefore, by storing certain data on your employees, you create an immediate risk of law enforcement agencies requesting access to data that could get your employees in trouble.

    Finally, this data is also subject to internal privacy risk and insider threat. Harassment, abuse and crime can occur if employees get their hands on such data and try to leverage it for their own gain.

    Keeping it fair 

    AI bias is a well-documented phenomenon, with cases such as facial recognition systems encoding bias against under-represented minorities. This could result in your AI monitoring solution not working consistently across all your employees. If it doesn’t work consistently, does that put some people at greater risk because they aren’t being correctly identified when they breach social distancing? This is why it is so important to consider bias, look out for it, and avoid it.
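
    One concrete way to look for this kind of inconsistency during validation is to measure the system’s detection rate separately for each demographic group on a labelled test set and compare the gap. A minimal sketch (the group labels and data are illustrative, and real fairness audits use richer metrics than a single rate gap):

```python
def detection_rate_by_group(records):
    """records: list of (group, correctly_detected) pairs from a
    labelled validation set. Returns the detection rate per group."""
    totals, hits = {}, {}
    for group, detected in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (1 if detected else 0)
    return {g: hits[g] / totals[g] for g in totals}

def max_disparity(rates):
    """Gap between the best- and worst-served groups; a large gap
    suggests the system puts some employees at greater risk."""
    return max(rates.values()) - min(rates.values())
```

    If the gap is material, the solution is not treating all employees consistently, and the vendor or model needs revisiting before deployment.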

    Transparency in automation

    When we use AI solutions for automated or assisted decision-making, we need to think about the consequences of getting it wrong. Additionally, under the GDPR, individuals have the right to know that automated decision-making is being used and to an explanation of the logic involved. It is important to consider the possible pitfalls of a solution and how they might affect your staff, e.g. could it aggravate mental health issues or cause financial distress?

    Taking error into account

    When it comes to AI solutions around COVID-19, speed has been crucial. However, the fast development of a large number of solutions leaves lots of room for error. A solution’s underlying assumptions may not be medically sound, or it may not achieve the precision required. The developers may have misinterpreted results or failed to account for the risk of false positives. This can lead to staff being penalised in error. It is important to weigh the solution’s benefits against the potential impact on employees if it goes wrong.
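
    The false-positive risk is easy to underestimate when the behaviour being flagged is rare. A quick worked example with made-up numbers: suppose only 2% of observations genuinely involve a breach, and the detector catches 90% of real breaches but also wrongly flags 5% of compliant behaviour. Applying Bayes’ rule, most flags turn out to be errors:

```python
# Illustrative numbers, not measurements from any real system.
base_rate = 0.02            # P(real breach) per observation
sensitivity = 0.90          # P(flag | real breach)
false_positive_rate = 0.05  # P(flag | no breach)

# Total probability that an observation gets flagged.
p_flag = base_rate * sensitivity + (1 - base_rate) * false_positive_rate

# Probability that a flagged employee actually breached the rule.
p_breach_given_flag = (base_rate * sensitivity) / p_flag
print(f"{p_breach_given_flag:.0%} of flags are genuine")  # roughly 27%
```

    So even a seemingly accurate detector would penalise compliant staff in nearly three out of four flagged cases here — which is why human review and appeals processes matter.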

    How to do it right

    "Trust in AI systems is becoming, if not already, the biggest barrier for enterprises." – Tracy Pizzo Frey, Google

    In order to build trust in AI systems and deploy AI solutions without ethical or compliance infringements, you need to engage your staff throughout the process and check off the following points:

    • Consult
    • Consent
    • Policies
    • Safeguards
    • Suppliers
    • Validation

    Consult your staff about the aims of your proposed solution and about what they consider to be appropriate. Talk about measures that would help them to feel more comfortable. You also need to ensure staff have a clear way to consent to being monitored and to the retention of data.

    If you will be handling and storing data, you must ensure you have policies in place denoting how it will be handled, and what appeals processes you will have. Here you should also lay out your safeguard measures to minimise the risk of incorrect actions, inappropriate access, etc.

    If you acquire your solution from a supplier, you must clarify exactly what data they will receive, how they will process it, and any ways they may monetise your employees’ data. Your final consideration should be how you will make sure that the solution is fair to all your employees. You need to validate the solution before investing in it and ensure it lives up to the standards you expect.

    In summary:

    • AI has played and continues to play a vital role in the pandemic, but there are still many ethical considerations around AI you need to make in order to stay compliant.
    • IoT and edge computing have enabled visual AI solutions for monitoring COVID-19 safety measures.
    • The risk of collecting employee data for these solutions is the impact on employees if the data is breached, requested by authorities, or exposed to insider risk.
    • Considerations include fair/unbiased solutions and use of data, the impact of using such solutions to your employees, and the consequences of potential errors.
    • To build trust in AI solutions, you can follow the six key points above to keep staff engaged and ensure compliance.

    Topics: AI Ethics, covid19

    Steph Locke

    Written by Steph Locke

    Steph Locke, CEO of Nightingale HQ, is an accomplished data scientist who has helped thousands of businesses during her time working in industry, as a consultant, and as an international keynoter and author.