
How To Instantly Protect Your Business, Staff, and Clients With an AI Policy

  • Writer: Dorien Morin-van Dam
  • Oct 27
  • 4 min read

If you run a small business or marketing agency, you do not have extra margin for avoidable risk. 


How do I know this? I am this ^^^. I am a small business owner and I work with several marketing agencies. 


Helloooo, you don't live in a cave, do you? AI and the use of AI carry risk.


For me, AI is already in my workflow through browsers, my phone, and my preferred tools. 


And for you, I know that AI is already in the workflow of most of your team members, employees, and freelancers. If this is a surprise to you...excuse me as I quickly roll my eyes at you. You should know this. 


And here is some harsh truth:

What you don't know can hurt you.

A clear AI policy will not slow you or your business down. It will set expectations and protect you, your team, customers, and clients.


An AI policy makes safe, efficient AI use possible for employees and freelancers alike.


Here are the stats we are working with right now 


Multiple recent studies show policy maturity lags behind usage. 

Traliant found only 60 percent of companies have an AI acceptable use policy, while a 2025 snapshot shows just 43 percent report an AI governance policy and 25 percent are still “in progress.” 


At the same time, Microsoft reports that 78 percent of AI users bring their own tools to work. 

That is the reality your policy must meet. 


Shadow use (cool phrase, eh?) adds urgency for smaller teams that handle client data. Recent polling shows 59 percent of U.S. employees use unapproved AI at work (yes, they hide their tools!), and many share sensitive information when they do.


That combination is the risk you can reduce in a single policy sprint. 


Why your brand can no longer wait for an AI policy


The smaller the team, the bigger the blast radius from one bad copy 'n paste. 


An AI policy protects client trust, reduces rework, keeps your brand voice intact across staff and contractors, and sets boundaries. 


It also gives your people permission to use AI where it helps most and clarity on where it does not belong.


A simple starter AI policy for agencies and small businesses


Before I give you this list, a quick promise. 


You do not need a 40-page manual. You need one page that everyone can find, read, and follow. That should be your goal when creating an AI policy right now.


I highly recommend you have a conversation with your whole team about this list BEFORE setting a policy. Your team has knowledge, ideas, and smart workflows. Instead of restricting them, ask what they are doing now and what's working. Be curious, not condescending. But it is your show, so if you think you can set these up without talking to your team... have a go at it. 🤷‍♀️


  1. Scope. Say where AI is approved, restricted, or prohibited. Name specific tools, browser extensions, and plugins your team may use.

  2. Data handling. List exactly what never goes into external tools. Include client PII, unreleased financials, health data, and any contracts. Require approved, logged tools for anything sensitive.

  3. Use cases. Approve a short list of what's allowed. Drafting outlines, first-pass summaries, research planning, code scaffolding, meeting notes. Keep it public and easy to update.

  4. Human in the loop. Require human review for accuracy, bias, and copyright before sending or publishing. The author of record is responsible for the final version.

  5. Transparency. Ask staff and freelancers to note AI assistance in internal docs and client deliverables when material. That consistency builds trust.

  6. Security and access. Pay for team tools and give your team access. Turn off model training on your content when you can.

  7. Compliance and IP. Write down your rules on copyrighted inputs, training data, and claims. Cite sources.

  8. Incident path. Offer one place to report data leaks, harmful hallucinations, or questions. Respond fast and log outcomes.

  9. Training and change. Provide short role-based training with examples. Give templates so adoption feels safe and useful. Ask various team members to lead monthly AI training sessions about their preferred tool, so everyone on the team will keep learning.

  10. Review cadence. Schedule quarterly AI policy reviews. Update as tools, regulations, and your use cases change.


Coach your team on safe AI usage

People want to do the right thing. Give them simple habits and language so they can. Keep this guidance near the tools they use every day.

  • Teach the red lines. No private client data in public tools. Use approved tools for anything sensitive.

  • Model good prompts. Show how to ask for structure, not secrets.

  • Require verification. For anything factual or regulated, compare outputs side by side with a trusted source before you ship.

  • Normalize transparency. Make a short note like 'AI-assisted draft' acceptable in internal docs and client previews.


As a leader, set the tone with clarity and confidence. Model and lead this behavior and expect your team to follow. Small teams follow leaders who draw clear lines and invite questions.


Take action today to create an AI policy


  • Meet with your team. ASAP.

  • Ask them how they use AI right now and ask for their input on creating the AI policy.

  • Draft the one-page AI policy the following week.

  • Set up a time to meet with your team to review the new policy and train them for 60 minutes. 


Get. It. Done.

Reach out to me if you have questions!
