MCP Server for Proprietary SQL Databases [Deep Dive]
Most proprietary SQL estates are not blocked by the database itself. They are blocked by integration shape. Legacy drivers, custom authentication, vendor-specific SQL dialects, and strict data controls make direct AI access risky. MCP solves the protocol side: it gives AI clients a standard way to discover tools, resources, and prompts. Your job is to build the narrow adapter between that protocol and your database.
In this tutorial, you will build a production-minded MCP server for a proprietary SQL system using a read-only query path, schema introspection, and defensive limits. The implementation below uses TypeScript and the official MCP SDK shape, but the architecture applies equally if your vendor driver is Java, .NET, or C-backed behind a service wrapper.
Key takeaway
Do not expose a general SQL console to the model. Expose a constrained adapter: publish schema as a resource, provide one or two tightly scoped read-only tools, and enforce limits in the server instead of trusting prompts.
Prerequisites
You should have:
- Node.js 20+ and npm
- Access to your proprietary database driver or client library
- A low-privilege database account with read-only rights
- A sample schema or test database
- A local MCP-compatible client for inspection and testing
Before you start: decide what the model is allowed to do. For most teams, that means table discovery, schema inspection, and parameterized reporting queries only. If you need sample data for evaluations, run it through TechBytes' Data Masking Tool before storing fixtures or logs.
1. Define the contract before writing code
Start by designing the server surface area, not the transport. A good first version usually has one resource and two tools:
- Resource: `schema://catalog` returns tables, columns, and types.
- Tool: `describe_table` returns detailed metadata for one table.
- Tool: `run_report_query` executes a parameterized read-only SELECT with row limits.
This keeps the model grounded. Instead of guessing whether `customers` has `created_at` or `signup_date`, the client can read the resource first. That single design choice reduces query failures more than prompt tuning usually does.
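Before any SDK code, the contract can be written down as plain data and reviewed like an API spec. The `contract` constant below is purely illustrative (it is not an MCP SDK concept); it just pins down names and required inputs so the later handlers have something fixed to match against:

```typescript
// Illustrative contract-as-data sketch. Nothing here is MCP SDK API;
// it exists so the surface area can be reviewed before implementation.
export const contract = {
  resources: [
    { uri: 'schema://catalog', mimeType: 'application/json' }
  ],
  tools: [
    { name: 'describe_table', required: ['tableName'] },
    { name: 'run_report_query', required: ['sql'] }
  ]
};
```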
Create the project skeleton:
```shell
mkdir proprietary-sql-mcp
cd proprietary-sql-mcp
npm init -y
npm install @modelcontextprotocol/sdk zod proprietary-sql-driver
npm install -D typescript tsx @types/node
npx tsc --init
```

The placeholder package `proprietary-sql-driver` represents your vendor library. Swap it for the actual client package, or for a thin internal wrapper if the real driver is not directly usable from Node.js.
2. Build a vendor-specific database adapter
Keep database logic out of the MCP handlers. Your adapter should normalize connection setup, schema discovery, parameter binding, and safety checks. That keeps the protocol layer thin and makes the code testable.
```typescript
// src/dbAdapter.ts
import VendorClient from 'proprietary-sql-driver';

export class DbAdapter {
  private client: VendorClient;

  constructor(private connectionString: string) {
    this.client = new VendorClient({ connectionString });
  }

  async connect() {
    await this.client.connect();
  }

  async listTables(schema = 'PUBLIC') {
    const sql = `
      SELECT table_name
      FROM information_schema.tables
      WHERE table_schema = ?
      ORDER BY table_name
    `;
    return this.client.query(sql, [schema]);
  }

  async describeTable(tableName: string) {
    const sql = `
      SELECT column_name, data_type, is_nullable
      FROM information_schema.columns
      WHERE table_name = ?
      ORDER BY ordinal_position
    `;
    return this.client.query(sql, [tableName]);
  }

  async runReadOnlyQuery(sql: string, params: unknown[] = [], maxRows = 200) {
    // Allow only statements that begin with SELECT.
    if (!/^\s*select\b/i.test(sql)) {
      throw new Error('Only SELECT statements are allowed');
    }
    // Defense in depth: reject any mutating keyword anywhere in the text.
    if (/\b(insert|update|delete|drop|alter|truncate|grant|revoke|merge)\b/i.test(sql)) {
      throw new Error('Mutating statements are blocked');
    }
    // Cap rows server-side. Strip any trailing semicolon first so the
    // appended clause stays valid SQL. If your dialect uses TOP or
    // FETCH FIRST instead of LIMIT, this is the place to adapt.
    const trimmed = sql.trim().replace(/;+\s*$/, '');
    const limitedSql = `${trimmed} LIMIT ${maxRows}`;
    return this.client.query(limitedSql, params);
  }
}
```

Three design choices matter here. First, the adapter enforces read-only behavior. Second, it caps rows on the server side. Third, it centralizes vendor SQL quirks in one file, which is critical when your proprietary platform uses nonstandard parameter syntax or catalog tables.
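The read-only checks are worth unit-testing before any real driver is wired in. This sketch repeats the adapter's two regex guards as a standalone helper; `assertReadOnly` and `isReadOnly` are illustrative names, not part of the adapter or the MCP SDK:

```typescript
// Standalone copy of the adapter's read-only checks, for unit testing.
// assertReadOnly and isReadOnly are illustrative helpers only.
export function assertReadOnly(sql: string): void {
  // Must begin with SELECT.
  if (!/^\s*select\b/i.test(sql)) {
    throw new Error('Only SELECT statements are allowed');
  }
  // Reject mutating keywords anywhere in the statement.
  if (/\b(insert|update|delete|drop|alter|truncate|grant|revoke|merge)\b/i.test(sql)) {
    throw new Error('Mutating statements are blocked');
  }
}

export function isReadOnly(sql: string): boolean {
  try {
    assertReadOnly(sql);
    return true;
  } catch {
    return false;
  }
}
```

Keyword blocklists are a backstop, not a guarantee: the low-privilege, read-only database account remains the real enforcement boundary.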
3. Expose MCP tools and resources
Now wire the adapter into the MCP server. For local development and desktop clients, stdio is the fastest path. It avoids HTTP deployment work while you validate the contract.
```typescript
// src/server.ts
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
  ListToolsRequestSchema,
  CallToolRequestSchema
} from '@modelcontextprotocol/sdk/types.js';
import { DbAdapter } from './dbAdapter.js';

// Fail fast if the connection string is missing instead of connecting
// with an empty string.
const connectionString = process.env.DB_CONNECTION_STRING;
if (!connectionString) {
  throw new Error('DB_CONNECTION_STRING is not set');
}

const db = new DbAdapter(connectionString);
await db.connect();

const server = new Server(
  { name: 'proprietary-sql-mcp', version: '1.0.0' },
  {
    capabilities: {
      resources: {},
      tools: {}
    }
  }
);

server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [
    {
      uri: 'schema://catalog',
      name: 'Database Schema Catalog',
      mimeType: 'application/json'
    }
  ]
}));

server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  if (request.params.uri !== 'schema://catalog') {
    throw new Error('Unknown resource');
  }
  const tables = await db.listTables();
  return {
    contents: [
      {
        uri: 'schema://catalog',
        mimeType: 'application/json',
        text: JSON.stringify(tables, null, 2)
      }
    ]
  };
});

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: 'describe_table',
      description: 'Return columns and types for one table',
      inputSchema: {
        type: 'object',
        properties: {
          tableName: { type: 'string' }
        },
        required: ['tableName']
      }
    },
    {
      name: 'run_report_query',
      description: 'Execute a parameterized read-only report query',
      inputSchema: {
        type: 'object',
        properties: {
          sql: { type: 'string' },
          params: { type: 'array', items: {} }
        },
        required: ['sql']
      }
    }
  ]
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const args = (request.params.arguments ?? {}) as Record<string, unknown>;

  if (request.params.name === 'describe_table') {
    if (typeof args.tableName !== 'string') {
      throw new Error('tableName must be a string');
    }
    const rows = await db.describeTable(args.tableName);
    return { content: [{ type: 'text', text: JSON.stringify(rows, null, 2) }] };
  }

  if (request.params.name === 'run_report_query') {
    if (typeof args.sql !== 'string') {
      throw new Error('sql must be a string');
    }
    const params = Array.isArray(args.params) ? args.params : [];
    const rows = await db.runReadOnlyQuery(args.sql, params);
    return { content: [{ type: 'text', text: JSON.stringify(rows, null, 2) }] };
  }

  throw new Error(`Unknown tool: ${request.params.name}`);
});

const transport = new StdioServerTransport();
await server.connect(transport);
```

That is enough for a working server. If your client stack prefers remote access, you can move to Streamable HTTP later without changing the adapter or tool semantics.
4. Run the server with environment-backed credentials
Do not hardcode credentials in code or client configs. Keep secrets in the environment and run the server as a dedicated process.
```shell
export DB_CONNECTION_STRING='vendor://readonly-user:secret@db-host:5432/appdb'
npx tsx src/server.ts
```

If you plan to share snippets with teammates, pass them through a formatter before publishing docs or examples. TechBytes' Code Formatter is useful here because MCP payload examples become hard to review once indentation drifts.
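Desktop MCP clients that launch stdio servers usually do so from a JSON config. The shape below is typical, but the exact file name and top-level key vary by client, and `proprietary-sql` is just an arbitrary label. Note that no credential appears here: the server reads `DB_CONNECTION_STRING` from its own environment, consistent with the advice above.

```json
{
  "mcpServers": {
    "proprietary-sql": {
      "command": "npx",
      "args": ["tsx", "src/server.ts"]
    }
  }
}
```

If your client only supports literal `env` entries in this file, treat the config as a secret and keep it out of version control.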
Verification and expected output
Verification should prove three things: the resource lists correctly, the schema resource reads correctly, and the query tool rejects unsafe SQL.
- Connect your MCP client to the server process.
- List available resources. You should see `schema://catalog`.
- Call `describe_table` with a known table.
- Call `run_report_query` with a safe SELECT.
- Attempt a blocked statement such as `DELETE FROM customers` and confirm the error.
A `describe_table` call looks like this:

```json
{
  "tool": "describe_table",
  "arguments": {
    "tableName": "customers"
  }
}
```

Expected output should look roughly like this:
```json
[
  {
    "column_name": "customer_id",
    "data_type": "INTEGER",
    "is_nullable": "NO"
  },
  {
    "column_name": "created_at",
    "data_type": "TIMESTAMP",
    "is_nullable": "NO"
  }
]
```

A blocked mutation should return a clear server-side error such as: `Only SELECT statements are allowed`.
Troubleshooting: top 3 issues
1. The model keeps querying tables that do not exist
Your schema resource is too thin. Include table names, columns, and basic type metadata. Models perform better when they can ground themselves in a current catalog instead of inferring from business language.
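One way to thicken the catalog is to merge column metadata into a single per-table document before serializing it into the `schema://catalog` resource. The grouping helper below is a sketch; `ColumnRow` mirrors the `information_schema.columns` shape queried earlier, and `buildCatalog` is an illustrative name, not an SDK API:

```typescript
// Group flat information_schema-style rows into a per-table catalog
// suitable for serializing into the schema resource.
export interface ColumnRow {
  table_name: string;
  column_name: string;
  data_type: string;
  is_nullable: string;
}

export function buildCatalog(rows: ColumnRow[]) {
  const catalog: Record<string, { column: string; type: string; nullable: boolean }[]> = {};
  for (const row of rows) {
    // Create the table entry on first sight, then append the column.
    (catalog[row.table_name] ??= []).push({
      column: row.column_name,
      type: row.data_type,
      nullable: row.is_nullable === 'YES'
    });
  }
  return catalog;
}
```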
2. Queries work in a SQL client but fail through MCP
Your proprietary driver may require different placeholder syntax, session setup, or schema qualification. Fix that in the adapter layer, not in the prompt. The MCP server should normalize those differences.
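Placeholder translation is a typical adapter-layer fix. If your vendor expects numbered `:1`-style parameters instead of `?`, a small rewrite at bind time keeps tool callers on one syntax. The `:n` convention here is an assumption; substitute whatever your driver documents:

```typescript
// Rewrite '?' positional placeholders into numbered ':1', ':2', ... markers.
// Assumes '?' never appears inside string literals in the generated SQL;
// prefer the driver's own binding API if it exposes one.
export function toNumberedPlaceholders(sql: string): string {
  let n = 0;
  return sql.replace(/\?/g, () => `:${++n}`);
}
```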
3. Sensitive rows show up in logs or examples
You are logging raw payloads or returning unrestricted samples. Remove row-level logging, mask fixture data, and keep result sets small. Production MCP servers should log query class, latency, and row counts, not full result bodies.
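A metadata-only audit record can be built at the call site. This sketch captures query class, latency, and row count and deliberately never touches result rows; `AuditEvent` and `audit` are illustrative names:

```typescript
// Build an audit event from metadata only -- never from result bodies.
export interface AuditEvent {
  tool: string;
  queryClass: 'select' | 'metadata';
  durationMs: number;
  rowCount: number;
}

export function audit(
  tool: string,
  queryClass: 'select' | 'metadata',
  startedAt: number, // e.g. Date.now() captured before the query ran
  rowCount: number
): AuditEvent {
  return { tool, queryClass, durationMs: Date.now() - startedAt, rowCount };
}
```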
What's next
Once the read-only path is stable, the next improvement is usually not more tools. It is better governance. Add per-tool authorization, schema allowlists, tenant scoping, and audit events. After that, consider richer resources such as data dictionaries, approved joins, and canonical business metrics so the model works from curated context instead of open-ended SQL.
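A schema allowlist is the simplest of those governance controls: check requested table names against an explicit set before any SQL is built, so the policy lives in one place. The allowlist contents below are illustrative:

```typescript
// Reject table names that are not explicitly allowlisted.
// The set contents are illustrative; populate from config in practice.
const ALLOWED_TABLES = new Set(['customers', 'orders', 'products']);

export function assertTableAllowed(tableName: string): void {
  if (!ALLOWED_TABLES.has(tableName.toLowerCase())) {
    throw new Error(`Table not permitted: ${tableName}`);
  }
}
```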
If your team needs higher throughput, move the same contract behind HTTP, add connection pooling, and cache schema resources aggressively. The protocol surface can stay small. The winning pattern for proprietary databases in 2026 is still the same: a thin MCP layer, a strict adapter, and explicit data boundaries.