Read all disclaimers at beave.rs/disclaimer
Many AppSec teams, particularly those with limited resources, focus on detection and remediation. However, there is an often overlooked area: services. AppSec teams can expedite development, mitigate risks, and standardize security across an organization through common services.
Problems with Software Security
The Explosion of Applications
Development paradigms have shifted, and the number of applications has exploded. The rise of microservices means that a single user experience could be powered by hundreds or thousands of distinct codebases, each with its own security concerns. Organizations are also much more willing to develop tools internally, and zero-trust architecture means that many internal tools are exposed to the public internet.
At the same time, these applications may not be securely written. Unmaintained legacy codebases are pervasive and often mission-critical. Code inherited from acquisitions may not adhere to SDLC expectations. Even when a codebase is supposedly written to SDLC standards and best practices, that claim rests on the assumption that engineers followed them throughout and never cut corners. All of the standards, processes, and best practices go out the window when an engineer works late at night just before a deadline - it is just human nature.
Silos Create "Artisanal" Solutions
Leadership teams often talk about "breaking down silos," and for good reason: simplification and unification are effective ways to increase productivity, efficiency, and security. Phil Venables (CISO of Google Cloud) recently published a blog post discussing a shift from "artisanal" approaches to "industrial" scale. I like this terminology, and I highly recommend reading his post on the subject.
A unified technology stack and extensive shared code are, on paper, amazing ideas. However, legacy systems and productivity pressures make this difficult to apply. There may be mission-critical code written in COBOL on an IBM mainframe. Some teams may need specific libraries for domain-specific tasks (such as AI work in Python). Other groups may have preexisting stacks or simply prefer working in different ones. While standardization is ideal, business priorities can make it infeasible.
Security as a Productivity Inhibitor
While cybersecurity is essential for any business, it is also a productivity inhibitor (at least in the short term). Employees may be unable to use their preferred software or cloud service because IT has not approved it. Similarly, software engineers must adhere to extensive (and, from their perspective, onerous) policies and procedures to produce well-written and secure software. Even if people understand the importance of quality and security, they may become frustrated by the guidelines (on a late night before a deadline, for example).
This reality can create an adversarial relationship between security teams and the rest of the company. The relationship deteriorates further because the value of cybersecurity is difficult to see. In a perfect world, people would never have to think about cybersecurity; it generally only becomes the topic of conversation when something has already gone wrong. The general public does not think about all the attacks that could have happened but were prevented by controls and policies.
Aggressive Abstraction
Applications usually have two categories of code with different purposes. First is the domain-specific logic. Your domain is why the application exists: the data schemas particular to your problem and the functions needed to extract useful information from them. Second is supporting code: everything that does not directly contribute to project objectives but is required for success (API servers, authentication, data sanitization, etc.).
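As a rough sketch of the split (the endpoint and function names here are hypothetical, invented purely for illustration), notice how little of a typical handler is actually domain logic:

```go
package main

import (
	"encoding/json"
	"net/http"
)

// Domain-specific logic: the reason the application exists.
// Here, a toy function that totals an order.
func orderTotal(prices []float64) float64 {
	total := 0.0
	for _, p := range prices {
		total += p
	}
	return total
}

// Supporting code: everything wrapped around the domain logic.
func handleTotal(w http.ResponseWriter, r *http.Request) {
	if r.Header.Get("Authorization") == "" { // authentication (supporting)
		http.Error(w, "unauthorized", http.StatusUnauthorized)
		return
	}
	var prices []float64 // input parsing and validation (supporting)
	if err := json.NewDecoder(r.Body).Decode(&prices); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	// Serialization (supporting) around one call into domain logic.
	json.NewEncoder(w).Encode(map[string]float64{"total": orderTotal(prices)})
}

func main() {
	http.HandleFunc("/total", handleTotal) // routing (supporting)
	http.ListenAndServe(":8080", nil)
}
```

Everything except orderTotal is supporting code, and none of it is unique to this application.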
I led major cultural and technical reforms on my FIRST Robotics team to improve our control systems' quality, reliability, and efficiency. One of the principles we employed was "aggressive abstraction." We had a "5x" approximation: each stage takes roughly five times the previous one, so an hour of domain-specific design meant about 5 hours to reach minimum viable code and 25 hours refining it. Are those precise numbers? No, but the theory is there. The primary driver of this development time, and most of our reliability issues, was the supporting code needed to enable our domain-specific logic.
As part of the reforms, we shifted development work to better-written abstractions with greater upfront time commitments. While development times increased in the short term, it led to faster development overall. Speed is one thing, but the real benefit was in quality and reliability. We developed tremendous confidence that the abstractions would work as expected. After our extensive refinement, if there was an issue at runtime, we knew the abstractions could not be the source.
When you look at security vulnerabilities in APIs, much of the risk comes from these supporting components. All of the OWASP Top 10 are more likely to occur in supporting code than in domain-specific code. If every application has a bespoke authentication mechanism, there are many opportunities for vulnerabilities and many systems that must be regularly penetration tested. The inflection point for AppSec teams is when they can provide trusted, reliable shared services across application teams: enabling resources to be spent on penetration test depth and freeing backend engineers to work on domain logic.
Practical Integration
I have spoken before about changing user behavior by dividing good security practices into smaller steps. The principle is that making each individual effort easy to achieve makes it easier to break inertia. Each step is small enough that there is little reason to resist it, and in aggregate (leveraging time to your advantage), users can experience meaningful and lasting behavioral changes.
The same principle can apply to development teams. It is challenging to convince teams with preexisting codebases to perform significant restructuring and refactoring when the visible benefits are few. However, if features can be rolled out over time, especially if they exist in parallel with existing approaches, you will likely see much greater adoption.
While teams run on various stacks, API proxies present a unique opportunity for gradual adoption. Proxies are particularly powerful since APIs come in a small number of standardized formats across technology stacks (REST, SOAP, GraphQL, and gRPC, among others). Since proxies are also independent of application code, legacy applications can benefit from proxy-based security features without changing, restructuring, or refactoring significant parts of the codebase.
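As a minimal sketch of the mechanics (the backend address is an assumption for illustration), Go's standard library can stand up a transparent reverse proxy in a few lines, and security controls can then be layered on without touching the legacy application behind it:

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The legacy backend; the address is assumed for illustration.
	backend, _ := url.Parse("http://localhost:9000")
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// Security features live in the proxy, not the application:
	// the backend code never changes.
	http.ListenAndServe(":8080", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Example control: cap request body size before anything
		// reaches the legacy code.
		r.Body = http.MaxBytesReader(w, r.Body, 1<<20)
		proxy.ServeHTTP(w, r)
	}))
}
```

Because the proxy sits in front of the service rather than inside it, features like this can be rolled out one at a time, in parallel with whatever the application already does.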
Shifting the Trust Boundary
Security needs to keep up with shifts in development paradigms. "Zero trust" was a fantastic theory that devolved into little more than throwing an SSO portal in front of enterprise applications. Microservice architectures mean the number of services within a trust boundary is much larger than in the monolithic ASP.NET days. Yes, removing VPNs has productivity and cost benefits, but the current "Zero Trust" implementation is dishonest and far from ideal.
Having a truly trustless system is impossible. However, tangible benefits can come from reducing trust boundaries from the enterprise or application level to the service level. If one microservice is compromised, there is a real risk to every other microservice. Since services are all networked and reside within the same trust boundary, lateral movement often goes unchecked. API proxies, however, mean that security teams can insert metadata for more systematic auditing of data flow. Proxies make these benefits invisible to backend engineers since tracking can occur at the entry and exit points.
With a proxy-based approach, microservice applications could reduce their trust boundary to the service rather than the application. Authentication and authorization checks could take place with every service call. Data validation could also be more scalable with less developer intervention. You could even insert "invisible" tracking to audit the flow of calls within an application and mitigate unconventional pathways (an indicator of unusual behavior or compromise) with minimal per-application requirements.
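A sketch of what that entry point could look like (the header names, the token check, and the backend address are all hypothetical; a real deployment would verify mTLS identities or signed service tokens rather than a static string):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// Hypothetical check: in practice this might validate an mTLS
// identity or a signed service token issued by the AppSec platform.
func verifyServiceToken(token string) bool {
	return token == "demo-service-token" // placeholder for illustration
}

func newTraceID() string {
	b := make([]byte, 8)
	rand.Read(b)
	return hex.EncodeToString(b)
}

func main() {
	backend, _ := url.Parse("http://localhost:9000") // the protected microservice
	proxy := httputil.NewSingleHostReverseProxy(backend)

	http.ListenAndServe(":8080", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Authorization on every service-to-service call: lateral
		// movement now has to defeat this check at each hop.
		if !verifyServiceToken(r.Header.Get("X-Service-Token")) {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}

		// "Invisible" tracking: propagate a trace ID so data flow can
		// be audited without any change to the backend code.
		traceID := r.Header.Get("X-Trace-Id")
		if traceID == "" {
			traceID = newTraceID()
			r.Header.Set("X-Trace-Id", traceID)
		}
		log.Printf("trace=%s %s %s", traceID, r.Method, r.URL.Path)

		proxy.ServeHTTP(w, r)
	}))
}
```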
Value for Backend Engineers
Many organizations waste significant time on initialization, especially for new projects, rebuilding the same supporting code. The result is lost productivity and questionable security. AppSec teams then run SAST and DAST scans that hopefully catch some bugs. However, some vulnerabilities, especially more advanced ones, will be missed. Thus, some critical systems may undergo pentests while others are left alone.
In the short term, API proxies enable greater security with minimal disruption. The ROI for expensive pentests would be MUCH higher since results propagate to all participating projects. In the long term, new projects could focus development work on domain-specific logic without much of the current overhead.
The Broader Shift to Domain-Specific Logic
Software teams are naturally trending towards focusing on domain-specific logic and outsourcing the rest. They outsource to both FOSS libraries and snippet sources. This tendency is good because it leads to drastic improvements in developer productivity and means organizations can concentrate resources on highly skilled developers.
At the same time, there are some real downsides. Remember that I speak as an outsider, but development teams tend to focus on short-term goals. "Working" code is prioritized over "good" or "trusted" code. This focus is understandable; software moves quickly. The problem is that software has become the confluence of many short-term decisions, while the long term goes unaddressed.
GitHub Copilot's value is in building the wrappers needed to support domain-specific logic. It works by taking an incomprehensible number of code samples and using them to approximate a user's intention. We see the same thing with StackOverflow: functions and snippets from other users get copied over, and nobody knows their security implications (whether AI- or human-created).
Developers often adopt the third-party solution because it is easy. Writing a good authentication layer is not fun. Developers generally prefer to work on domain-specific logic. The key to high-quality and secure software is to make the most secure path easy. If a developer can deploy a solid AAA system with a single flag change, they will do that. I want to see more in-house libraries so engineers rely on something other than StackOverflow. However, for the sake of security teams, API proxies are a good and largely platform-agnostic way to achieve desirable results.
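As an illustration of that ergonomic goal (the Options struct and Wrap function stand in for a hypothetical in-house library; none of this is a real package), the secure path could look like this:

```go
package main

import (
	"log"
	"net/http"
)

// Options and Wrap stand in for a hypothetical in-house AppSec
// library; the names are illustrative, not a real API.
type Options struct {
	EnableAAA bool // turn on the vetted authentication/authorization/auditing stack
}

func Wrap(next http.Handler, opts Options) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if opts.EnableAAA {
			// A real library would authenticate, authorize, and audit here.
			if r.Header.Get("Authorization") == "" {
				http.Error(w, "unauthorized", http.StatusUnauthorized)
				return
			}
			log.Printf("audit: %s %s", r.Method, r.URL.Path)
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("domain logic lives here")) // engineers focus only on this
	})

	// The single flag change: deploying the secure path is the easy path.
	http.ListenAndServe(":8080", Wrap(mux, Options{EnableAAA: true}))
}
```

When the vetted path is this easy, StackOverflow stops being the path of least resistance.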
The Inflection Point
The inflection point for Application Security and Software Engineering comes when AppSec teams provide high-quality, secure services to engineers. At this point, security shifts from a productivity inhibitor to a productivity booster. Penetration tests go from a tradeoff between depth and breadth to a combination of both. API proxies mean integrating existing codebases can be easy, while new codebases can focus resources on domain logic.
Building such an ecosystem will take a meaningful investment, and there will always be room for growth or modification. However, if an organization invests, the long-term returns could be tremendous. Security teams are uniquely positioned to address these challenges because we are much more conscious of the importance of code quality and have less exposure to the short-term pressures that dominate application development.
A focus on services is the most straightforward path to shift from artisanal to industrial security, and is one that security teams should strongly consider.