U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

The United States government has released new artificial intelligence security guidelines for critical infrastructure, announced by the Department of Homeland Security (DHS). DHS and the Cybersecurity & Infrastructure Security Agency (CISA) are directly tasked with protecting U.S. critical infrastructure, which spans 16 sectors, including energy, transportation, communications, and healthcare.

DHS and CISA’s announcement coincides with the White House’s National Security Memorandum-22 (NSM-22), a long-awaited revision of the Obama-era critical infrastructure directive, PPD-21.

When the original PPD-21 was unveiled in 2013, CISA did not yet exist as an agency. Now, it headlines the new memorandum as the national coordinator for cybersecurity across critical infrastructure.

In response to U.S. President Biden’s Executive Order (EO) on the safe, secure, and trustworthy use of AI, DHS and CISA are responsible for leveraging AI to “advance homeland security missions while protecting individuals’ privacy, civil rights, and civil liberties,” according to the press release.

Recognizing AI’s immense potential and inherent risks, the DHS aims to establish a framework that promotes the responsible development and deployment of this technology across various sectors.

Safe, Responsible, and Trustworthy Use of AI

The new initiative creates an AI roadmap detailing the department’s 2024 plans across three major lines of effort:

  • Responsibly leverage AI to advance Homeland Security missions while protecting individuals’ privacy, civil rights, and civil liberties.
  • Promote nationwide AI safety and security.
  • Continue to lead in AI through strong, cohesive partnerships.

Implementing pilot programs will allow DHS to test AI’s mission applicability in specific areas of focus:

  • Aiding in the detection and remediation of critical vulnerabilities in U.S. Government software, systems, and networks.
  • Homeland Security Investigations (HSI) will use AI to enhance investigative processes, with a focus on detecting fentanyl and increasing the efficiency of investigations into child sexual exploitation.
  • The Federal Emergency Management Agency (FEMA) will deploy AI to help communities develop hazard mitigation plans that build resilience and minimize risks.
  • United States Citizenship and Immigration Services (USCIS) will use AI to improve immigration officer training.

Promoting Trust and Innovation

The DHS acknowledges the transformative power of AI, emphasizing its potential to enhance safety, security, and economic prosperity. To foster trust in AI systems, the department advocates for responsible practices that prioritize:

  • Fairness and Equity: Ensuring AI systems are free from bias and discrimination, promoting equitable outcomes for all.
  • Transparency and Explainability: Making AI systems understandable to users, allowing them to comprehend the decision-making processes involved.
  • Safety and Security: Implementing robust measures to protect AI systems from malicious attacks and unintended consequences.
  • Privacy: Safeguarding individual privacy rights and ensuring responsible data collection and use within AI systems.

Guiding Principles for AI Development

The DHS has outlined five key principles to guide the responsible development and use of AI:

  • Lawful and Ethical Use: AI systems must operate within the bounds of the law and adhere to ethical principles.
  • Purposeful Design and Deployment: AI development should be driven by clear objectives and consider potential societal impacts.
  • Accountability and Oversight: Mechanisms for accountability and oversight should be established to ensure responsible use of AI.
  • Transparency and Explainability: The decision-making processes of AI systems should be transparent and understandable to users.
  • Safety and Security: AI systems should be designed with safety and security as paramount concerns.

The DHS recognizes that responsible AI development requires a collective effort. The department encourages stakeholders across various sectors to join in this endeavor, ensuring that AI technologies are used for the betterment of society.

While many government organizations, including the Department of Defense, hold conflicting opinions on the use of AI and generative AI, it’s important for each agency to at least have a plan for assessing these technologies. It’s refreshing to see DHS and CISA lean into AI and generative AI and responsibly test their mission applicability.

U.S. critical infrastructure remains the country’s most critical cybersecurity target, facing repeated cyberattack attempts from foreign adversaries in recent years. On May 7, 2021, for example, a major ransomware attack disrupted the Colonial Pipeline.

You can access the DHS press release for a complete list of the initiative’s announcements.
