For the most part, as a technical writer, I document what others design. I explain APIs, describe CLI options, and write how-tos that help developers complete their tasks. But I rarely get the opportunity to see the work that goes into designing the tools and applications that our users use. How exactly was the API built? What did you use to create all of these client libraries? How did you generate the OAS file? How are you deploying the VMs?
I’ve always wanted answers to these questions, so I decided to find them myself. I’m embarking on a new project (and blog series) I’m calling Beyond the Docs, where I step into a developer’s shoes and learn how to build the tools I’ve been documenting.
I’m starting by building SpecGate, a CLI tool that acts as a quality gate for OpenAPI specs. I would consider it to be the next evolution of SmartDoc. After that, the plan is to:
- Build a new API with the Django REST Framework
- Use SpecGate to validate the spec in a GitHub workflow
- Document the API with Redocly
- Generate an SDK from the OAS file
- Document the SDK
Each step builds on the last one, and I’ll be writing about it all here!
About SpecGate
The first stop on this journey is SpecGate, a Go-based CLI tool I built to validate OpenAPI specs before they reach production. I chose Go because it’s become the industry standard for building CLIs, and it compiles into a single binary.
Building SpecGate wasn’t easy because I knew nothing about Go, and it’s very different from Python. I kind of hacked my way through it, but honestly, that’s how I learn.
Starting out
Before I wrote a single line of Go, I bought a book, spent some time getting familiar with the syntax, and poked around with the basic features. Then I started building. I didn’t have a detailed plan, but I wasn’t starting from scratch either, since I’d built SmartDoc a few months ago. The logic would be largely the same, but unlike SmartDoc, SpecGate doesn’t use an LLM for its analysis. SpecGate is a deterministic CLI.
To build SpecGate, I used cobra-cli, a very popular framework for building CLIs with Go. Getting started was straightforward: install cobra-cli, run cobra-cli init, add a command with cobra-cli add, and then I had a working skeleton for the project.
In this blog post, I’ll talk about the commands that I built and some considerations and limitations I thought about when building the CLI.
Designing commands
specgate check
The first command I designed is specgate check. It’s the core of what SpecGate does: validating an OAS file, flagging violations, and returning a non-zero code if the spec isn’t ready.
When you define a new command with cobra-cli, you start by defining the usage and providing short and long messages that are shown when someone runs the help command.
var checkCmd = &cobra.Command{
	Use:     "check <spec>",
	Example: `specgate check oas.json`,
	Short:   "Check an OpenAPI spec for readiness",
	Long: `check evaluates an OpenAPI specification against readiness rules.
If errors are detected, the command exits with a non-zero
status code, allowing it to be used as a quality gate in CI.`,
}
After that, you add logic in the Run() function. This logic executes when someone runs specgate check:
Run: func(cmd *cobra.Command, args []string) { }
For example, if I wanted the command to print “Hello World” to the terminal, it would look like this:
Run: func(cmd *cobra.Command, args []string) {
	fmt.Println("Hello World!")
}
Inside the Run() function, I defined one argument: the OAS file name. So to run specgate check, you must pass a file name as the argument; without one, the command errors out.
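Cobra can enforce that requirement declaratively by setting Args: cobra.ExactArgs(1) on the command. The check itself is tiny; here’s a stdlib-only sketch of the equivalent logic (requireSpecArg is my name for it, not something in SpecGate):

```go
package main

import (
	"errors"
	"fmt"
)

// requireSpecArg mirrors what cobra.ExactArgs(1) enforces for specgate check:
// exactly one positional argument, treated as the OAS file path.
func requireSpecArg(args []string) (string, error) {
	if len(args) != 1 {
		return "", errors.New("check requires exactly one argument: the path to an OpenAPI spec")
	}
	return args[0], nil
}

func main() {
	// No argument: the command should error out.
	if _, err := requireSpecArg(nil); err != nil {
		fmt.Println("error:", err)
	}
	// One argument: proceed with the named file.
	file, _ := requireSpecArg([]string{"oas.json"})
	fmt.Println("checking", file)
}
```

With cobra, the same rule is one line on the command definition, and cobra prints the usage message for you when it fails.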
Then I used the kin-openapi package to parse the OAS file and ensure the structure is valid:
Run: func(cmd *cobra.Command, args []string) {
	file := args[0] // The OAS file name, passed as the first argument
	loader := openapi3.NewLoader()
	doc, err := loader.LoadFromFile(file)
From there, helper functions I created examine several components of the OAS file:
- Operation summaries, descriptions, and tags
- Descriptions for success and error responses
- The servers object

For example, the CheckOperation function looks at each operation in a given path and checks whether any operation summaries, operation IDs, operation descriptions, or tags are missing:
func CheckOperation(op *openapi3.Operation, path string, result *CheckResult) {
	if strings.TrimSpace(op.Summary) == "" {
		result.OperationSummaryViolations = append(result.OperationSummaryViolations, path)
	}
	if strings.TrimSpace(op.OperationID) == "" {
		result.OperationIdViolations = append(result.OperationIdViolations, path)
	}
	if len(op.Tags) == 0 {
		result.OperationTagViolations = append(result.OperationTagViolations, path)
	}
	if strings.TrimSpace(op.Description) == "" {
		result.OperationDescriptionViolations = append(result.OperationDescriptionViolations, path)
	}
}
If any of these components are missing, it’s flagged as a violation.
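To turn those slices of violations into a pass/fail gate, the result only needs to know whether anything was recorded. Here’s a rough sketch of how that could work; the CheckResult fields match the ones used above, but HasViolations and the exit-code helper are my guesses, not SpecGate’s actual code:

```go
package main

import "fmt"

// CheckResult collects the paths that violated each rule.
type CheckResult struct {
	OperationSummaryViolations     []string
	OperationIdViolations          []string
	OperationTagViolations         []string
	OperationDescriptionViolations []string
}

// HasViolations reports whether any rule was violated.
func (r *CheckResult) HasViolations() bool {
	return len(r.OperationSummaryViolations)+
		len(r.OperationIdViolations)+
		len(r.OperationTagViolations)+
		len(r.OperationDescriptionViolations) > 0
}

// exitCode maps the result to a process exit status: a non-zero
// code is what lets a CI job fail on a bad spec.
func exitCode(r *CheckResult) int {
	if r.HasViolations() {
		return 1
	}
	return 0
}

func main() {
	result := &CheckResult{
		OperationSummaryViolations: []string{"/pets"},
	}
	fmt.Println("exit code:", exitCode(result))
}
```

That non-zero exit is the whole trick behind using the tool as a quality gate: GitHub Actions (or any CI system) fails the step automatically.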
specgate advise
I wanted to include some LLM functionality in this CLI, similar to SmartDoc. Rather than use an LLM to analyze the OAS file, I decided to use the LLM to provide some suggestions for improving descriptions.
To do this, I created a helper function, SuggestFromReport, that creates a new OpenAI client and sends a prompt to the GPT-5-mini model:
func SuggestFromReport(reportJSON []byte, specBytes []byte) (string, error) {
	client, err := newOpenAIClient()
	if err != nil {
		return "", err
	}
	ctx := context.Background()
	prompt := fmt.Sprintf("...") // prompt text omitted for brevity
	// ... send ctx, client, and prompt to the model and return its suggestions
}
It accepts the report in JSON format, and sends the JSON to the LLM. The prompt instructs the LLM to parse through the JSON and provide suggestions for missing descriptions.
The purpose of this command is to give teams a starting point for missing descriptions. The LLM generates starter content that writers or developers can then iterate on.
specgate rules
The next command I created was specgate rules. The purpose of this command is to display all of the rules SpecGate uses for its analysis. I imagine that if a dev team ever uses this tool, it would be nice to be able to see which rules drive the CLI’s functionality.
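I won’t reproduce the real rule set here, but a rules command can be as simple as ranging over a slice of rule definitions and printing a table. A sketch with hypothetical rule IDs and wording:

```go
package main

import "fmt"

// rule describes one readiness check. These entries are illustrative,
// not SpecGate's actual rule list.
type rule struct {
	ID          string
	Severity    string
	Description string
}

var rules = []rule{
	{"operation-summary", "error", "Every operation must have a summary"},
	{"operation-id", "error", "Every operation must have an operationId"},
	{"operation-tags", "warning", "Every operation should have at least one tag"},
}

func main() {
	for _, r := range rules {
		fmt.Printf("%-18s %-8s %s\n", r.ID, r.Severity, r.Description)
	}
}
```

Keeping the rules in a single slice like this also gives the check command one source of truth to iterate over, so the listing can never drift from the behavior.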
specgate init
The last command that I created was specgate init. Toward the end of developing v1, I decided that I wanted to add the ability to customize the CLI in some way. For example, it would be nice to be able to configure the severity for certain rules. Perhaps some teams see a missing operation description not as a warning, but as an error. Users could configure a YAML file to set the severity for each rule.
Running specgate init creates a new configuration file. If users forget to run the command before they run their first check, specgate check creates one for them.
I wasn’t able to get the severity configuration done for 1.0.0, but I was able to enable users to define the URLs that SpecGate should flag as violations in the servers object:
config:
server_block_list:
- https://www.example.com
- https://localhost
Hopefully I can figure out how to add severity customization in a later release.
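If I do add it, I imagine it extending the same config file, something like this (the rule names and defaults are hypothetical):

```yaml
config:
  server_block_list:
    - https://localhost
  severity:
    operation-description: error   # stricter than the default warning
    operation-tags: warning
```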
Final thoughts
I learned a lot building SpecGate. My last CLI, SmartDoc, used an LLM for its analysis. This meant that I could offload the hard work to the model. With SpecGate, I had to define every rule myself, think through every edge case, and make every decision about what counts as an error and what counts as a warning. That was a lot harder than expected.
I also had to think about the user experience in a different way. What does a developer need to see? Where might they get stuck? What’s the happy path? These are questions I ask as a technical writer, but building SpecGate meant answering them in code. This is exactly the kind of thing Beyond the Docs is about.
Next I’ll be working on building a Django API and putting SpecGate to work!