Enterprises are adopting generative AI in a big way. We’re elevating work and transforming business processes from sales enablement to security operations. And we’re getting massive benefits: increasing productivity, improving quality, and accelerating time to market.
With this advancement comes an equal need to weigh the risks. These include software vulnerabilities, cyberattacks, improper system access, and sensitive data exposure. There are also ethical and legal considerations, such as copyright or data privacy law violations, bias or toxicity in generated output, the propagation of disinformation and deepfakes, and a widening of the digital divide. We’re seeing the worst of it in public life right now, with algorithms used to spread false information, manipulate public opinion, and undermine trust in institutions. All of this highlights the importance of security, transparency, and accountability in how we create and use AI systems.
There is good work afoot! In the U.S., President Biden’s Executive Order on AI aims to promote the responsible use of AI and address issues such as bias and discrimination. The National Institute of Standards and Technology (NIST) has developed a comprehensive framework for AI systems’ trustworthiness. The European Union has proposed the AI Act, a regulatory framework to ensure the ethical and responsible use of AI. And the AI Safety Institute in the U.K. is working towards developing safety standards and best practices for AI deployment.
The responsibility for establishing a common set of AI guardrails ultimately lies with governments, but we’re not there yet. Today, we have a rough patchwork of guidelines that are regionally inconsistent and unable to keep up with the rapid pace of AI innovation. In the meantime, the onus for safe and responsible AI use will be on us: AI vendors and our enterprise customers. Indeed, we need a set of guardrails.
A new matrix of obligations
Forward-thinking companies are getting proactive. They’re creating internal steering committees and oversight groups to define and enforce policies according to their legal obligations and ethical standards. I’ve read more than a hundred requests for proposals (RFPs) from these organizations, and they’re good. They’ve informed our framework here at Writer for building our own trust and safety programs.
One way to organize our thinking is to build a matrix with four areas of obligation (data, models, systems, and operations) plotted across three responsible parties: vendors, enterprises, and governments.
Guardrails within the “data” category include data integrity, provenance, privacy, storage, and legal and regulatory compliance. In “models,” they’re transparency, accuracy, bias, toxicity, and misuse. In “systems,” they’re security, reliability, customization, and configuration. And in “operations,” they’re the software development lifecycle, testing and validation, access and other policies (human and machine), and ethics.
Within each guardrail category, I recommend enumerating your key obligations, articulating what’s at stake, defining what “good” looks like, and establishing a measurement system. Each area will look different across vendors, enterprises, and government entities, but ultimately they should dovetail with and support each other.
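For teams that want to operationalize this, the matrix above can be sketched as a simple data structure. The categories and example guardrails come from the taxonomy described here; the layout and names like `build_matrix` are illustrative assumptions of mine, not Writer’s actual tooling.

```python
# Illustrative sketch of the AI guardrail matrix: four obligation
# categories crossed with three responsible parties. Names below are
# hypothetical, chosen for this example only.

PARTIES = ("vendor", "enterprise", "government")

GUARDRAILS = {
    "data": ["integrity", "provenance", "privacy", "storage", "compliance"],
    "models": ["transparency", "accuracy", "bias", "toxicity", "misuse"],
    "systems": ["security", "reliability", "customization", "configuration"],
    "operations": ["sdlc", "testing and validation", "access policies", "ethics"],
}

def build_matrix():
    """Expand categories x parties into cells, one per party's obligations.

    Each cell would hold that party's enumerated obligations, what's at
    stake, the definition of "good," and a measurement system.
    """
    return {
        (category, party): list(guardrails)
        for category, guardrails in GUARDRAILS.items()
        for party in PARTIES
    }

matrix = build_matrix()
print(len(matrix))  # 4 categories x 3 parties = 12 cells
```

In practice, each cell would diverge per party (a vendor’s “data provenance” obligations differ from a regulator’s), which is where the dovetailing the text describes comes in.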
I’ve chosen sample questions from our customers’ RFPs and translated each one to demonstrate how the corresponding AI guardrail might work.
As we transform business with generative AI, it’s crucial to recognize and address the risks associated with its implementation. While government initiatives are underway, today the responsibility for safe and responsible AI use is on our shoulders. By proactively implementing AI guardrails across data, models, systems, and operations, we can gain the benefits of AI while minimizing harm.
May Habib is CEO and co-founder of Writer.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.