
Update on the State of AI Regulation in the Employment Context
In recent years, the use of artificial intelligence (AI) in human resources (HR) has drawn significant regulatory attention. With the Trump administration's AI Action Plan, released on July 23, 2025, the landscape of AI regulation in the employment context is poised to shift markedly. This overview summarizes the key developments, their implications for employers, and the potential direction of AI regulation following this pivotal plan.
The Context of AI in Human Resources
AI’s integration into HR practices presents both opportunities and challenges. As explored in previous discussions, the use of AI in hiring and employee performance assessments raises serious concerns about algorithmic discrimination. Laws such as the Colorado Artificial Intelligence Act (CAIA) have emerged in response, requiring employers to exercise reasonable care to prevent discriminatory practices perpetuated by AI systems.
The CAIA, enacted in 2024, imposes a legal obligation on companies using AI in hiring to maintain a robust risk management program. Employers are presumed to have exercised reasonable care if they align their practices with standards set forth by the National Institute of Standards and Technology (NIST). This presumption provides a legal safety net for companies that can demonstrate adherence to those guidelines.
Overview of the AI Action Plan
President Trump’s July 23, 2025, AI Action Plan rests on three main pillars: fostering innovation, building AI infrastructure, and strengthening American leadership in international AI diplomacy and security. The plan was accompanied by three executive orders that signal a strategic shift in how AI regulation is approached at both the federal and state levels.
- Innovation vs. Regulation: The Action Plan favors innovation over regulation, indicating that while states retain the right to enact their own AI laws, those laws cannot be “unduly restrictive” of innovation. This sets a precedent for federal intervention where state regulations are perceived as barriers to technological advancement.
- Funding Conditions: Notably, the plan directs federal funding programs to consider a state’s regulatory environment when allocating resources. States with stringent AI regulations may face limits or reductions in federal funding, effectively pressuring them to align their laws with federal expectations.
- NIST Framework Revisions: One of the most significant implications is a directed revision of the NIST framework, specifically the removal of references to Diversity, Equity, and Inclusion (DEI). If implemented, this could fundamentally alter the standards companies rely on to manage AI risk.
Implications for Employers
The revisions to the NIST framework and the potential federal push against state-level regulations introduce uncertainty for employers relying on AI in their HR practices. Several key considerations emerge from the recent developments:
- Risk Management Alterations: Removing DEI considerations from the NIST framework could disincentivize companies from prioritizing diversity in hiring. Employers may be left without a guiding structure that emphasizes DEI, increasing the risk of algorithmic discrimination.
- State Regulations and Compliance: Companies operating in states with proactive AI regulations, such as Colorado, may face conflicting requirements. While federal guidelines may suggest leniency, state laws could impose stricter obligations, particularly concerning workforce diversity and anti-discrimination practices.
- Increased Scrutiny and Monitoring: Employers must remain vigilant in tracking changes to both federal and state AI regulations to ensure compliance and mitigate the risk of discriminatory AI practices. HR departments should continuously review their AI strategies and risk management programs.
Future Perspectives
As the regulatory landscape evolves, the full impact of the AI Action Plan and its accompanying executive orders remains uncertain. While these changes are intended to spur innovation, their consequences for employees and job candidates remain unclear.
- Balancing Innovation and Accountability: As AI plays a more central role in HR processes, striking a balance between fostering innovation and ensuring accountability will be crucial. Employers must walk this tightrope carefully to protect their workforce from biased outcomes driven by AI technologies.
- Potential for Legal Challenges: Without robust safeguards in place, companies may become targets of legal challenges over discriminatory practices in hiring and employment assessments. This risk underscores the importance of proactive compliance with evolving standards.
- Continued Advocacy for DEI: Despite federal moves to downplay DEI, advocacy for inclusive workplace practices will likely continue from various stakeholders, including employees and nonprofit organizations. Companies that proactively embrace DEI may bolster their reputations, thereby attracting a diverse talent pool while mitigating risks.
Conclusion
The regulatory environment for AI in the employment sector is in flux, shaped by the Trump administration’s recent actions. As companies navigate this evolving landscape, they must prioritize compliance with both federal and state law while taking an ethical approach to the use of AI in HR. Promoting innovation without compromising the fundamental rights of employees will define the future of AI regulation in the workplace. Companies should stay informed, adapt their practices accordingly, and advocate for policies that uphold fairness and inclusivity in their hiring processes.