Query Spark SQL clusters over the Thrift/HiveServer2 protocol. Works with Apache Spark, AWS EMR, Apache Hive, and Apache Impala.
io.github.aidancorrell/spark-sql-mcp-server
https://github.com/aidancorrell/spark-sql-mcp-server
STDIO
1 required env var
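Since the server speaks STDIO, an MCP client launches it as a local subprocess. As a rough sketch only: the command, arguments, and environment variable name below are assumptions for illustration, not taken from the server's documentation — consult the repository README for the actual launch instructions.

```json
{
  "mcpServers": {
    "spark-sql": {
      "command": "java",
      "args": ["-jar", "spark-sql-mcp-server.jar"],
      "env": {
        "SPARK_HOST": "thrift.example.internal"
      }
    }
  }
}
```

The client starts the process and exchanges MCP messages over the process's stdin/stdout; no network endpoint needs to be exposed by the server itself.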
Configuration this server reads at startup.
Hostname of the Spark Thrift Server
Port of the Spark Thrift Server (default: 10000)
Default database to use
Authentication method: NONE, LDAP, KERBEROS, CUSTOM, or NOSASL
Username for LDAP authentication
Password for LDAP authentication
Kerberos service name (default: hive)
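The settings above can be pictured as a small environment-driven config loader. This is a minimal sketch assuming hypothetical variable names (`SPARK_HOST`, `SPARK_PORT`, and so on) — the listing does not give the exact names the server reads, so check its README before relying on them. The defaults (port 10000, Kerberos service name `hive`) come from the descriptions above.

```python
import os


def load_config(env=None):
    """Assemble Thrift Server connection settings from environment variables.

    The variable names here are hypothetical illustrations; the server's
    README documents the exact names it reads at startup.
    """
    env = os.environ if env is None else env
    return {
        # Hostname of the Spark Thrift Server (the one required setting)
        "host": env.get("SPARK_HOST"),
        # Port of the Spark Thrift Server (default: 10000)
        "port": int(env.get("SPARK_PORT", "10000")),
        # Default database to use
        "database": env.get("SPARK_DATABASE", "default"),
        # Authentication method: NONE, LDAP, KERBEROS, CUSTOM, or NOSASL
        "auth": env.get("SPARK_AUTH", "NONE"),
        # Username/password apply to LDAP authentication only
        "username": env.get("SPARK_USER"),
        "password": env.get("SPARK_PASSWORD"),
        # Kerberos service name (default: hive)
        "kerberos_service_name": env.get("SPARK_KERBEROS_SERVICE", "hive"),
    }
```

With only the required hostname set, every other field falls back to the documented default, which matches the "1 required env var" note above.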
Where to find authoritative docs and source for Spark SQL.