Taming AI: Can America Lead Before It’s Too Late?

Shawn Collins is a policy expert who served as the first Executive Director of the Massachusetts Cannabis Control Commission, building its regulatory framework from scratch. He now heads The Homegrown Consulting Group (THC Group), a consultancy guiding industries—cannabis, alcohol, health care, and artificial intelligence—through complex compliance challenges. Tim Harvey is an entrepreneur and co-founder of FlyDragon, a firm using AI to streamline content generation for real estate agents, boosting efficiency and impact. A THC Group client, FlyDragon combines Harvey’s innovative approach with Collins’ regulatory expertise to navigate AI’s evolving landscape.

Late last year, a New York grandmother answered a call that pierced her quiet evening. On the line, her grandson’s voice trembled with panic, pleading for $10,000 to escape the aftermath of a car wreck he claimed had left him in dire straits. Driven by love and instinct, she wired the money without hesitation, only to discover hours later it was a cruel ruse—an artificial intelligence (AI) so adept at cloning his voice it hijacked her trust entirely. She wasn’t an outlier. According to the Federal Trade Commission, over 50,000 such scams bled American consumers of $108 million in 2024 alone. This wasn’t a glitch but a stark preview of AI’s dual nature: a technology that dazzles with promise yet wounds with surgical precision.

Here in 2025, AI’s reach is vast and transformative. At FlyDragon, for instance, it’s disrupting the SEO agency model in real estate by crafting content faster and cheaper than traditional methods, streamlining a market long bogged down by inefficiency. Globally, it’s touching 400 million lives through tools like ChatGPT, reshaping how we work, think, and connect—often with costs 150 times lower than just two years ago. It’s diagnosing cancer with a precision that humbles seasoned physicians, offering hope where once there was uncertainty. Yet its shadow looms just as large: fraud preys on the vulnerable, deep fakes erode the foundations of truth, and authoritarian rivals like China and Russia wield it as a geopolitical weapon in a contest already underway. America stands at a crossroads, swept up in AI’s relentless advance, racing toward a future we must govern—not merely chase—with a framework to harness its brilliance while curbing its chaos. The question is urgent: can we craft that framework before the perils outpace the promise?


Biden’s Guardrails: A Foundation Undone

In October 2023, the Biden Administration confronted AI’s meteoric rise with an executive order titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” It was a sprawling bid to tether a force already slipping beyond our grasp, mandating safety reports for systems trained with massive computing power: think models that could power autonomous weapons or sway elections. It pushed for watermarks to flag AI-generated content, a bulwark against deep fakes, and sought to root out bias while bolstering privacy protections. The order acknowledged a chilling reality: in an era where trust is currency, algorithms can destabilize societies as swiftly as missiles, and often with less warning.

To enforce this, it birthed the U.S. AI Safety Institute under the National Institute of Standards and Technology (NIST) and invoked the Defense Production Act, compelling labs to disclose their development plans. This was a clear signal that national security was at stake. Agencies scrambled to study AI’s ripple effects, from job markets reshaped by automation to justice systems warped by biased algorithms, building on voluntary oversight efforts OpenAI had pioneered earlier that year. It wasn’t perfect. Tech firms bristled at the bureaucratic burden, decrying delays to their breakneck pace. Conservative critics and industry voices saw it as a straitjacket on American ingenuity, a handicap in the race against China’s state-driven AI ambitions. Their concerns about overreach weren’t baseless, either. Regulation can stifle as much as it steadies. But the order at least stood as a recognition that AI’s unchecked growth risked turning promise into peril. It offered a foundation, a starting point we could have built upon with care.

Trump’s Torch: Unleashing Without a Blueprint

That foundation crumbled in January 2025 when President Trump issued his own executive order, “Removing Barriers to American Leadership in Artificial Intelligence.” With characteristic bravado, he dismantled Biden’s framework, arguing its rules weighed America down in a high-stakes contest with China, whose 2030 AI goals loom as a benchmark for global dominance. Trump’s vision was bold and brash: unshackle American ingenuity, fuel a tech surge, and outpace Beijing. There’s a kernel of truth here, too. Building a regulatory scheme is slow work, and the process would slow the sprint. Innovation has indeed surged since, from Silicon Valley labs to innovative ventures like FlyDragon. But the order is noticeably light on specifics, which creates inherent risk.

The fallout was immediate and palpable. Scams are multiplying, deep fakes are proliferating, and the Office of Science and Technology Policy (OSTP) is now sifting through a flood of feedback to draft a new AI Action Plan, which it opened for public input on March 1. Without a clear strategy, we’re betting on chaos to breed brilliance—a gamble that could leave us exposed. A tiered approach, grading AI systems by their risk, could bridge this gap: strict oversight for high-stakes tools, freedom for lower-impact businesses like FlyDragon. It’s a policy fix that balances growth with security, a lesson Trump’s order sidesteps but we can’t afford to ignore.

The Labs Step Up—With Limits

The labs leapt at OSTP’s invitation, each offering a vision to steer this runaway train. On March 13, OpenAI led with a plan rooted in voluntary partnerships with federal agencies, export curbs to hinder rivals like China, and a massive tech buildout to showcase American might. With 400 million ChatGPT users as proof of AI’s reach, they proposed a collegial handshake—noble, but fragile when profits or pride collide. Google charted a lighter path, leaning on its long-standing AI principles: open-source collaboration, minimal rules, just enough safety to keep innovators stateside. It’s a flexible stance, but it risks bending too far under pressure.


Microsoft, weaving its Responsible AI ethos, pitched a global framework—transparency, accountability, and export controls to shield the West from rogue actors (read: China and Russia). It’s a broader lens, yet still leans on cooperation over compulsion. Anthropic, ever cautious, took a firmer line, calling for mandatory testing, transparency for high-risk systems, and research to ensure AI doesn’t spiral out of control—a stance true to its mission. All sing innovation’s praises, but the tension lies in just how long their leash should be. OpenAI bets on goodwill despite rivalries; Anthropic warns of chaos where self-interest reigns. Google and Microsoft straddle the divide, offering flexibility with a nod to global stakes. None fully embraces binding rules—tiered risks, strict liability—even though history, like Wall Street’s 2008 meltdown, shows self-governance buckles when ambition outruns restraint.

AI’s Reach: Promise and Peril Collide

AI’s impact echoes the internet’s ‘90s boom, a tidal wave reshaping our lives with breathtaking speed. It’s catching cancer shadows before physicians blink, powering FlyDragon to match agents and homebuyers at a fraction of past costs—a practical win that’s scaled our reach and revenue. Worldwide, it’s touching 400 million people with tools now 150 times cheaper than two years ago, from boardrooms plotting sales with uncanny accuracy to classrooms where kids lean on digital tutors to crack algebra. It’s woven into our fabric, sometimes answering our calls with a voice so human you’d never suspect otherwise.

But every advance casts a shadow, and AI’s is darker than most. That grandmother’s loss was one thread in a $108 million fraud plague in 2024, a number the FTC tracks with growing alarm. In 2022, a deep fake of Ukraine’s President Volodymyr Zelenskyy, urging his troops to surrender to Russia, captured millions of views before Twitter (now X) could flag it—detection tools lagged helplessly behind. Bias seeps into systems we trust: hiring algorithms sideline women, loan models echo redlining’s scars by denying credit to Black neighborhoods. AI’s hunger for data, woven into dated and fragile networks, risks breaches that could dwarf Equifax’s 2017 collapse, exposing millions in a heartbeat. These threats aren’t looming—they’re here, raw and unbridled, demanding targeted traps: watermarks for fakes, tougher laws for scams, and shields for privacy. We can’t wait for the wreckage to force our hand.

China’s Shadow: A Geopolitical Chess Match

This isn’t America’s game alone—it’s a global stage where China plays by starkly different rules. Their 2030 AI plan is a state-driven juggernaut, with DeepSeek as its spearhead—a platform so entwined with the Chinese Communist Party (CCP) it bends to Beijing’s will. OpenAI doesn’t mince words: DeepSeek’s models, built on data pilfered from Western firms, now rival GPT-4 in language mastery. In layman’s terms, that means it is capable of spinning disinformation or wreaking economic havoc with chilling ease. Russia adds fuel to the fire, deploying AI in 2024 to meddle in our elections with a precision that stunned even seasoned analysts. This is a stark reminder: AI doubles as a weapon, and we’re not the only ones armed.

America’s response has faltered, too. Over 780 bills clog state legislatures, each a local stab at control, most driven more by politics than strategy. It’s a patchwork that risks hobbling us just as China and Russia charge ahead. The Commerce Department tightened export controls in 2024, choking DeepSeek’s access to cutting-edge chips—a solid move, but not enough. We need global hardball: G20 norms on ethics and data alongside alliances with tech-booming nations like Japan and India. Labs can’t pull this off alone, chasing profit over geopolitics. This is a fight for leverage—rules we write, not react to.

The Cost of Standing Still

Without a grip, we’re defenseless against a tide already crashing at our shores. That grandmother’s scam was an opening salvo—consumer protections we expect erode precisely when they’re vital. History offers brutal lessons: the internet’s lawless ‘90s bred scams that drained wallets until regulations caught up; 2008’s financial greed raced past guardrails, leaving taxpayers to sweep up the shards. Policy often waits for carnage to spur action, but AI’s pace won’t indulge that delay. OpenAI’s rosy pitch for technological freedom, buoyed by profit, ignores 2008’s truth: self-governance buckled when ambition outstripped prudence, and the wreckage lingered. Frighteningly, AI’s stakes are even higher—think faked elections and fabricated wars. That is what makes Google and Microsoft’s hedging so frustrating: middle-ground flexibility offered when firm choices are due.

Nowhere is this clearer than with Gen AI, the kids growing up with this tech as their birthright. From playgrounds to paychecks, AI shapes their world. Yet we risk leaving them exposed—banning it won’t work when it’s already in their hands. We owe them guidance, not a shrug. It is a lesson from the ‘90s web, whose traps we dodged only after scams forced us to teach smarts rather than prohibition. Five locks can cage this beast, channeling its power while shielding us from its wilder impulses.


Five Locks to Cage the Beast

So how do we govern AI without stifling its potential? We propose five locks—a comprehensive, policy-driven set of solutions to secure its promise and blunt its harm. Drawing from the labs’ pitches, we ground them in clear-eyed resolve while rejecting their faith in self-policing for rules with teeth.

First, Tier the Risks: Not all AI demands the same scrutiny. High-stakes systems—like autonomous drones navigating crowded skies or algorithms sentencing in courtrooms—require rigorous testing to catch flaws before they crash or convict unfairly. Human oversight with veto power, paired with transparency to demystify the black box, feels non-negotiable when lives or liberties hang in the balance. By contrast, low-risk tools—like FlyDragon’s content generation for real estate agents or playlist curators shaping your commute—can operate with lighter oversight, fostering creativity where the stakes are modest. Europe’s AI Act sketches a tiered model, classifying systems by risk, but we’d carve it leaner and sharper, tailored to AI’s breakneck pace and America’s entrepreneurial spirit: firm guardrails where they matter, freedom where they don’t.

Second, Trap the Threats: Vague promises won’t stop AI’s bleeding edges—precise countermeasures will. Deep fakes, like that Zelenskyy video that nearly tilted a war, need digital markers—hidden tags that Adobe’s already testing—to flag fakes before they spread like wildfire. Scams, like that $108 million voice-cloning plague, call for a tougher Computer Fraud and Abuse Act: update this 1986 law with steeper fines and jail time, a move the FTC has pushed since 2023 to match the crime’s scale. Privacy, hemorrhaging as AI mines our personal data, demands a federal shield stronger than the California Consumer Privacy Act—mandatory opt-ins and hefty breach penalties to protect where the state patchwork fails. Creators, meanwhile, deserve royalties or legal protections for work AI repurposes, a copyright tweak musicians like Billie Eilish have demanded in lawsuits spiking since 2024. These aren’t band-aids but targeted traps for specific threats.

Third, Play Global Hardball: AI slips borders—from Shenzhen to Worcester in an instant—and half-measures won’t counter China’s heft. Export controls, tightened in 2024, choke DeepSeek’s chip access, a solid policy win we should double down on. Diplomacy matters more: forging G20 norms—agreements on AI ethics and data-sharing—could rally allies like Japan and India, whose tech sectors boom, to balance Beijing’s state-driven push. The State Department could lead this by 2026 if prioritized now. Alliances count too: joint AI research pacts with Indo-Pacific nations could outpace DeepSeek’s gains—gains, remember, that are built on stolen IP. This isn’t isolation but leverage, ensuring America shapes the rules rather than scrambles to match them later.

Fourth, Sharpen the Tools: Outpacing AI’s risks means building better defenses today, not tomorrow. DARPA’s $2 billion AI Next campaign, launched in 2018, could forge bias-killers—models trained to scrub prejudice from hiring or lending, as MIT’s 2024 trials did with 85% success. Fake-spotters, flagging deep fakes faster than human eyes, could’ve spared that grandmother her loss and countless others worse. Tax credits can stretch this muscle, incentivizing firms like Nvidia and startups—from Silicon Valley to, yes, Worcester—to partner with DARPA, pooling resources to match AI’s relentless speed. This builds on existing efforts, escalating defense into offense against AI’s darker instincts.

Fifth, Prepare the Next Generation: The kids of Gen AI—raised with this tech as their birthright—need more than warnings; they need wisdom to thrive. Flooding airwaves with PSAs, like the FCC’s 2024 robocall campaign for seniors, can train them early to distrust too-perfect videos or calls. Tweaking 529 college savings plans to fund AI skills—coding bootcamps, ethics courses—equips them for jobs AI won’t steal, a model Texas has piloted with $50 million in grants since 2023. This echoes how we taught ‘90s kids to dodge web scams, rooting safety in smarts, not bans. Prohibition falters when the tech is already in their pockets. It’s an investment in a generation that’ll either master AI or be mastered by it.

Self-governance remains a fairy tale that labs spin for profit—2008’s greed-driven collapse exposed its flaws when guardrails failed. Mandatory audits by an expanded NIST, with teeth to enforce compliance, and strict liability—say, a $1 billion fine for a deep-fake election hack—forge accountability that handshakes simply can’t touch. Try as they might, states can’t lead us on this; 780 bills prove local agendas breed chaos, not coherence. This demands a national strategy.

The Way Forward

This is our defining moment. The ‘90s scams drained wallets; 2008’s greed torched homes—AI’s unchecked run could fracture trust, elections, even peace itself. China and Russia press their edge, largely undeterred. Our children, Gen AI, deserve a world they can shape, not salvage. Five locks—tiered risks, threat traps, global strategy, sharper tools, and prepared youth—offer the path. Companies like FlyDragon will thrive when AI’s guided, not wild; that grandmother’s loss fades when threats are caged. We’ve governed fire, steel, and code before—AI’s no different. We act now, proving we can wield power with purpose, or cede control to a force too potent to drift unchecked.


Shawn Collins

Shawn Collins is one of the country’s foremost experts in cannabis policy. He is sought after to opine and consult on not just policy creation and development, but program implementation as well. He is widely recognized for his creative mind as well as his thoughtful and successful leadership of both startup and bureaucratic organizations. In addition to cannabis, he has a well-documented expertise in health care and complex financial matters as well.

Shawn was unanimously appointed as the inaugural Executive Director of the Massachusetts Cannabis Control Commission in 2017. In that role, he helped establish Massachusetts as a model for the implementation of safe, effective, and equitable cannabis policy, while simultaneously building out and overseeing the operations of the East Coast’s first adult-use marijuana regulatory agency.

Under Shawn’s leadership, Massachusetts’ adult-use Marijuana Retailers successfully opened in 2018 with a fully regulated supply chain unparalleled by their peers, complete with quality control testing and seed-to-sale tracking. Since then, the legal marketplace has grown at a rapid pace and generated more than $5 billion in revenue across more than 300 retail stores, including $1.56 billion in 2023 alone. He also oversaw the successful migration and integration of the Medical Use of Marijuana Program from the stewardship of the Department of Public Health to the Cannabis Control Commission in 2018. The program has since more than doubled in size and continues to support nearly 100,000 patients due to thoughtful programmatic and regulatory enhancements.

Shawn is an original founder of the Cannabis Regulators Association and also helped formalize networks that provide policymakers with unbiased information from the front lines of cannabis legalization, even as federal prohibition persists. At the height of the COVID-19 pandemic, Collins was recognized by Boston Magazine as one of Boston’s 100 most influential people for his work to shape the emerging cannabis industry in Massachusetts.

Before joining the Commission, Shawn served as Assistant Treasurer and Director of Policy and Legislative Affairs to Treasurer Deborah B. Goldberg and Chief of Staff and General Counsel to former Sen. Richard T. Moore (D-Uxbridge). He currently lives in Webster, Massachusetts with his growing family. Shawn is a graduate of Suffolk University and Suffolk University Law School, and is admitted to practice law in Massachusetts.

Shawn has since founded THC Group in order to leverage his experience on behalf of clients, and to do so with a personalized approach.

https://homegrown-group.com