    How one big city is letting AI agents in

March 9, 2026 · 10 Mins Read


    The next wave of AI will be defined by agentic systems that can take actions: query databases, navigate portals, retrieve records, and increasingly interact with public digital infrastructure at scale.

That shift is already visible: a growing share of the traffic hitting government sites and services is machine traffic. Some of it is benign (search and discovery). Some is ambiguous (scraping and automated browsing). And some could become actively harmful if agents reserve scarce services, submit fraudulent requests, or generate volume that overwhelms public systems.

The problem is that the government’s current interfaces were not designed for agent-to-government interactions, and the default state of the world has become improvisation: agents “figure it out” by scraping pages and guessing based on prior training.

    This is where Boston’s work becomes instructive. Rather than treating agents as something to block wholesale, or something to embrace without guardrails, Boston is experimenting with a middle path: build a governed, secure, and reliable layer that mediates how AI agent systems interact with government resources. 

Boston CIO Santi Garces [Photo: City of Boston]

    In a recent interview, Boston’s CIO, Santi Garces, described why the city is investing in the Model Context Protocol (MCP) as that layer; why they’re starting with open data as a low-risk proving ground; how they’re improving reliability by pushing computation into the data portal itself; and what it would take for MCP-like infrastructure to become replicable digital public infrastructure that other cities can deploy.

    Can you explain MCP, and why city governments should care?

    MCP stands for Model Context Protocol, and it’s relatively recent. Anthropic, the company behind Claude, launched MCP servers about a year ago. Why it matters is that it provides a way for large language models to interface with the kinds of resources we have in government. Concretely, it’s a way to connect LLMs to APIs and other programmatic systems, for example, allowing an AI assistant to retrieve transit updates or submit a service request through official city systems. We think it will be a new layer that serves as an intermediary between the government’s digital infrastructure and these models.

    This is exciting for Boston because the world is moving fast, and we’re already seeing websites and services being activated or consumed by agents. MCP servers can serve as a layer through which the government can add governance and control.

    Mechanically, an MCP server creates a set of tools. You describe, in plain language, when a tool should be used. Then you define what inputs need to be extracted from a natural-language request and how that translates into deterministic programmatic access to a resource. LLMs can be random; MCP is part of the pathway to make certain interactions more reliable and secure.
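The pattern Garces describes can be sketched in plain Python. This is a hedged illustration of the idea, not Boston’s actual code: the tool name, schema, and the stand-in data table are all hypothetical. The key point it shows is the shape of the contract: a plain-language description tells the model when to use the tool, a schema constrains what inputs the model must extract, and the handler maps those inputs to a deterministic lookup.

```python
# Hypothetical sketch of the MCP tool pattern. The description tells the
# model *when* to call the tool; the schema constrains *what* the model
# must extract from a natural-language request; the handler is the
# deterministic part that always produces the same answer for the same
# input.
RESTAURANT_COUNTS = {"boston": 1234}  # placeholder stand-in for a live dataset

tool = {
    "name": "count_restaurants",
    "description": "Use when the user asks how many restaurants a city has.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def handle_count_restaurants(args: dict) -> dict:
    """Deterministic handler: same input always yields the same query."""
    city = args["city"].lower()
    if city not in RESTAURANT_COUNTS:
        return {"error": f"no data for {city}"}
    return {"city": city, "count": RESTAURANT_COUNTS[city]}
```

The randomness stays on the model’s side of the boundary (deciding to call the tool, extracting `city`); everything after that boundary is ordinary, auditable code.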

    The dream is that cities invest in this infrastructure and point to different models to interact with a city’s MCP layer, ensuring it’s reliable, secure, and provides a better experience for people using agentic systems to interact with the government. A lot has to be true for that future, but we’re very excited about it.

What normally breaks when people rely on “just the chatbot” and prompting? What problem is MCP solving?

Take our first MCP server: open data. If you ask Claude or ChatGPT or Gemini something like, “How many restaurants are there in Boston?”, those models will answer either from their training data, which is probably out of date, or they’ll make something up. The risk of inaccuracy or hallucination is high.

    It might do better if it can browse the web, but then you’re relying on it to find the right source, and we know a lot of information online is outdated or inaccurate. It might pull from an old report or an article from five years ago.

    What we’ve been able to do with Open Context, our first MCP instance tied to Boston’s open data portal, is create a direct link between the portal and these AI tools. MCP servers are interoperable, so it doesn’t matter which AI tool you’re using.

    If you ask an AI tool connected to this MCP server, “How many restaurants are there in Boston given the open data portal,” it automatically searches Boston’s portal, finds the right dataset, and generates a SQL query against that dataset. It queries live data reliably and returns an answer grounded in the city’s actual data infrastructure.
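Boston’s open data portal runs on CKAN, which exposes a `datastore_search_sql` action that accepts a SQL string; a minimal sketch of the “generate a SQL query against that dataset” step might look like the following. The resource ID below is a made-up placeholder, and this is an illustration of the general CKAN pattern, not the Open Context implementation.

```python
from urllib.parse import urlencode

# CKAN-style portals expose datasets ("resources") that can be queried
# with SQL through the datastore_search_sql action.
PORTAL = "https://data.boston.gov/api/3/action/datastore_search_sql"

def build_count_query(resource_id: str) -> str:
    """Translate 'how many rows?' into one deterministic API call URL."""
    sql = f'SELECT COUNT(*) AS n FROM "{resource_id}"'
    return f"{PORTAL}?{urlencode({'sql': sql})}"

# A model picks the dataset; the query it triggers is fixed and auditable.
url = build_count_query("hypothetical-resource-id")
```

Because the answer comes back from a live query rather than from the model’s weights, it is grounded in whatever the portal currently holds.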

    We spend a lot of money and time building data infrastructure that many people don’t use because it’s inconvenient. Most people don’t know SQL, and even knowing which dataset is right is hard. These tools bridge that gap, getting you to the right answer while avoiding many of the pitfalls in AI tools today.

    How did you make this more trustworthy and what did the development process look like?

    We started in the fall of 2025 with students from Northeastern’s AI for Impact program at the Burnes Center. We’ve been rolling out a tool to Boston city employees called AI Launchpad, which provides access to LLMs, but we wanted it to be more useful.

    We looked at how employees use AI tools, drawing on our experience and survey data. Data analysis is a common use case. But to analyze, people have to download data, paste it into a context, and go through a lot of steps. So our starting motivation was: how do we make those workflows easier, more convenient, and more reliable?

Around that time, I was at an AI retreat in Boston and spoke with Ramesh Raskar at MIT about what the agentic web will look like and the need to build an open version of it. That weekend turned into action quickly: Saturday at the retreat, then Sunday speaking at MIT, challenging people to build better agentic experiences for Boston. Then, on Monday, we said, let’s try to build an MCP server and connect it to AI Launchpad.

Because we had brilliant students, by October we had a prototype that connected to the open data portal. Through November and December, we’ve been iterating to make it more reliable. It did a good job finding datasets, but it wasn’t as strong at analyzing large datasets: good for small samples, less good at scale.

    One innovation was to push more computation into the open data portal itself. Most data portals can run queries. So we’re using the portal to do more of the analytical work, which improves reliability and also makes the overall interaction more efficient and cost-effective.
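The push-down idea can be shown with a toy example, using SQLite as a stand-in for the portal’s query engine (the table and values are invented). Aggregating at the source means the model’s context receives a two-row summary instead of every underlying record.

```python
import sqlite3

# Toy demonstration: SQLite plays the role of the data portal's engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inspections (neighborhood TEXT)")
conn.executemany(
    "INSERT INTO inspections VALUES (?)",
    [("Dorchester",)] * 500 + [("Back Bay",)] * 300,
)

# Client-side aggregation would ship all 800 rows into the model's context.
all_rows = conn.execute("SELECT * FROM inspections").fetchall()

# Push-down aggregation ships back just 2 summary rows.
summary = conn.execute(
    "SELECT neighborhood, COUNT(*) FROM inspections GROUP BY neighborhood"
).fetchall()
```

Fewer tokens in the context means lower cost and less room for the model to mishandle raw rows, which is the reliability gain Garces describes.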

    You’ve also talked about this as a replicable layer of digital public infrastructure. What else do cities need to be able to implement this?

    This is why we’re excited. With emerging technology, it’s possible we’ll be using a different acronym in six months, but right now, MCP looks like a real path to solve this.

    We think MCP is a component of digital public infrastructure and should be tied to digital public infrastructure (DPI). The agentic web is only helpful if it creates reliable, secure intermediation that serves real human beings. AI could help if someone is busy, doesn’t speak English, or has a disability. There are many reasons this could matter for access. But without the right infrastructure, the experience becomes less reliable.

    The MCP pattern is appealing because it lets you leverage existing DPI components—identity, API exposures, payment APIs—by creating a middle layer between what an AI system “sees” and the underlying infrastructure the government already has, in a way that can be made more reliable.

    We’re starting with open data because it’s low risk and already public. But it could evolve to intermediation around service requests and other interactions. We believe the government should have the capability to build and steward this. But we can also imagine vendors incorporating this type of interface into the products they sell to governments.

    Let’s talk about security. What threats feel most realistic with agent systems, and how does MCP help?

One concern is that our APIs are not always well secured. There are agentic browsers and tools that make it easy to automate interactions. And we’re seeing more and more traffic to boston.gov that isn’t from people; it’s from AI systems scraping and “deep searching.”

    It’s not hard to imagine AI tools also requesting services. A major risk is when an AI tool makes requests that aren’t tied to a real human need. You could have fraudulent requests or actors generating scarcity by consuming limited government resources and potentially reselling access, similar to ticket scalpers at concerts.

    Another risk is that without a controlled layer, it’s harder to secure and monitor the traffic between AI systems and government systems.

    What excites us about MCP servers is that this middleware could make it easier to block unauthorized inbound agentic requests with cybersecurity tools, while still enabling legitimate uses. The idea is: people who need services use an authorized channel that the government controls, can associate with identity, and can monitor and secure end-to-end.

    Without that middleware, government faces an uncomfortable choice: block agentic interaction entirely, or leave it open in the wild. MCP offers a middle ground: governance for agentic interactions.

    Are there things you’re intentionally not exposing MCP to right now?

    We’re starting with open data. Our AI policies in Boston, rolled out a couple of months ago, state that we’re not using AI to process information that could affect people’s lives, property, or civil liberties because of reliability issues and intrinsic, complex biases.

    So, for now, those are categories we avoid. It’s not just “a human in the loop.” We know AI intermediation can create adverse effects that are hard to detect and remediate.

    At the same time, we work closely with the disability community and with people who face language access issues. Government is hard to access for people who need it most. And those are often the same people least likely to have private access to LLMs, paid subscriptions, reliable internet, and personal devices.

    If you had a magic wand, what’s the biggest blocker you’d remove?

    There are technical gaps because MCP is new. Early on, MCP servers didn’t support some authentication pieces natively; we had to add frameworks to secure them. The ecosystem is changing fast.

    But the biggest thing is discoverability and ease of use. We need to get to a point where using MCP infrastructure is as easy as pointing an LLM to a URL. With websites, you type the name or use search. We need that for MCP: trivial discovery, trivial access, effectively zero barrier to entry. We’ve made it easier, but there’s still too much technical legwork.

    For another city that wants to move in this direction, what’s the action they should take now?

    Good metadata management is essential. LLMs consume data, but they don’t understand what it is without good descriptions and context. So it starts with good data governance.

    We intend to share this work. We’re proud of it, and it’s thanks to collaboration with the GovLab and the Burnes Center that we’ve been able to move quickly. We intend to make Open Context an open-source project so others can replicate it.

    The MCP server itself doesn’t cost a lot to run. Our goal is to make it as simple as deploying a package into whatever public cloud a city uses. The rest of the puzzle, how this ties into broader services, is something every city will have to solve, and we’re solving it in Boston, too.

    But importantly, data becomes useful only when people use it. Data quality improved when we started publishing open data. We think governance and quality will improve further when more people use open data. And we’re hoping GenAI makes it easier for people to use open data, so we can solve problems collectively.

    —

    A version of this interview was originally published at Reboot Democracy.


