The Palantir Pack: Startups founded by Palantir alumni – Protocol

A group of founders and funders who worked at the secretive data-analysis startup share a playbook: Hire the best engineers, go after audacious ideas and do the unscalable things needed to succeed.
When Luba Lesiva created the Palantir Alumni syndicate on AngelList, she thought it would be a small project to help a handful of LPs invest in companies started by her former co-workers. By her mental count, the former head of Palantir’s investor relations could name a dozen, maybe two dozen, alumni-led companies. Her thesis was that the group would do a deal or two a year.
“I was massively, massively, massively wrong,” said Lesiva, the founder of what’s now called Palumni VC. “We’re at over 700 LPs in the syndicate right now. It’s not a small side gig anymore.”
Investor interest has grown in proportion to the number of startups spinning out of Palantir: Lesiva tracks 170 that are either founded or led by ex-Palantir executives. The irony is that Palantir’s own software, which helps companies and the government find patterns within data, could probably identify the links better than she can.

The Palantir group hasn’t reached the fame of the PayPal Mafia, whose famous Fortune photo shoot and rapidly successful next acts made it legendary. Instead, much like the company itself, the group that I’m calling the Palantir Pack has emerged more quietly, its members taking years to build their companies and firms.
The Palantir Pack includes early team members like co-founder Joe Lonsdale, first employee Alex Moore and 10th hire Garry Tan. All three have been founders and are now funders to a network of alumni. Other cohorts of founders, like much of the leadership team who started defense startup Anduril, bonded in their Palantir days. But unlike PayPal, which saw its early employees leave after eBay bought the company and immediately start new ventures, Palantir is still spinning out new founders and arguably hasn’t reached its full potential. There are new waves of up-and-coming endeavors that could add to the Pack’s unicorn ranks, like AI startup Arena, Web3 developer tool Kurtosis and data science company Hex — all of which have announced funding this year.
“The Pack has gradients of maturity, and there’s going to be a lot more,” predicts Trae Stephens, an early employee at Palantir who went on to become a partner at Founders Fund and co-founder and executive chairman at Anduril.
The Palantir playbook started with going after big, ambitious, mission-driven problems and hiring the best talent out of universities to do it, according to the founders, early employees and VCs interviewed for this story. At Palantir, they built an engineering-first culture that wasn’t afraid to work hard and do unscalable things, like deploying engineers to customer sites to make the product a success. Now the Palantir Pack is running the same playbook as it builds the next generation of startups.
“Those folks tend to be very entrepreneurial, they’re comfortable operating on their own,” Lesiva said. “So there’s an outsized amount of Palantirians that end up wanting to build a startup, and that’s how you end up with over 170 startups in the ecosystem.”
If there’s one thing that PayPal and Palantir share, beyond co-founder Peter Thiel, it’s that both startups faced enormous challenges in their early days.

PayPal packed a lot of drama into its early years, from the merger between Elon Musk’s and Thiel’s competing startups to executive churn including Musk’s ouster to a difficult IPO and the eventual buyout by eBay. Palantir, founded in 2003, saw a slower burn: hard technical challenges and the difficulty of selling into the government right at the beginning of the Iraq War.
“There was a lot of pressure on margins, there was a lot of pressure on how to structure things so that we could actually have a scalable business with positive unit economics. It was not easy,” Anduril’s Stephens said. “I think everyone has that memory of fighting through it not being easy for a long time.”
It took 17 years for the company to go public, and in that time it faced a lot of criticism, even internally, for helping companies and government agencies process and analyze data, including its controversial contract with ICE. There are still lingering questions over whether Palantir sees too much.

But working through that adversity was an essential ingredient in bonding the network together, Palantir alumni say.

“It’s really hard for companies like Google to have these sorts of mafias because everything is just too good. It’s like crazy profit margins or the growth profile is wild. Everyone gets incredible comp. So you don’t build that sort of resilience that’s required to go and become really high-caliber founders,” Stephens said.
Palantir was relentless in pursuing the top talent in Silicon Valley when it started. That’s the first step in the Palantir playbook: Hire the best and brightest talent, typically from universities, and then get them to bring in their friends.
“It’s really hard for companies like Google to have these sorts of mafias because everything is just too good.”
One of Palantir’s co-founders took an advanced Natural Language Processing class at Stanford after he had graduated in order to recruit the best talent on campus, recounted Moore, Palantir’s first hire who is now a partner at 8VC after starting a few other companies. The early Palantir team also used its own network-analyzing product in combination with Facebook’s friend-connection data to pin down potential recruits’ social links.

Another time, an eBay employee let the team secretly set up shop in a conference room and interview people to poach them from inside the eBay office, said Lonsdale, who is also founder of multibillion-dollar startups like Addepar and the venture firm 8VC. “It’s a level of aggression to do whatever you can to get the top people to bring them over,” he said. “The culture of the company was who are your smartest friends, who are their smartest friends, and whatever they’re doing is not as important as what we’re doing.”
Getting the brightest technical talent in the door wasn’t the only thing that set employees up to launch a new wave of startups after their tenures at the company. Palantir was unusual in using forward-deployed engineers, or FDEs, to essentially replace a sales team, embedding with customers to solve their problems on-site. It was a strategy in part to help with talent, Lonsdale said: The more engineers were exposed to customers’ problems, the more likely they were to feel responsible for their work and to bring in their friends to help solve them.
Deploying engineers in what it called an 80-20 model — where Palantir’s software could solve the first 80% of a customer’s needs and then the FDEs had to make tweaks and revisions to the other 20% on-site — also meant that Palantir employees had a lot of ownership of the product.
“It’s rare to have that up-front perspective inside an enterprise to the real-world problems,” said Pratap Ranade, whose startup Kimono was acquired by Palantir in 2016. He left in 2017 and worked at another company before starting his latest venture in AI, Arena. “That was a powerful lesson that we carried over, which was that mindset of going physically to customers, spending time with the customer and their people and really understanding it,” Ranade said. “Even though to some degree, you’re doing things that don’t scale.”

The do-whatever-it-takes mentality is something that’s deeply ingrained in the Palantir culture because the company is so mission-driven and focused, said early employee Matt Grimm. During his tenure, he ended up deploying to Iraq and Afghanistan to implement Palantir’s tech.
Amplitude, co-founded by Palantir alum Jeffrey Wang (front, in gray blazer), was valued at $5 billion shortly after its market debut in 2021. Photo: Nasdaq
“When you as an employee go through that, you build this sort of connective fiber, this in-the-trenches-with-your-colleagues kind of mentality that in the long run leads to a very tight and close-knit community. And I think Anduril’s no clearer example than that,” said Grimm, who co-founded the company and is now COO. “There’s a reason that in our C-suite, four of us all started at Palantir within six months of each other and all had that exact same early experience and that exact same kind of struggle and then worked through it.”
The same intensity in the culture that helped bond the Pack together is also, many alums say, what has made them deeply obsessed with the problems they’re solving and willing to tackle ambitious ideas.
Some are building on top of Palantir’s software through its new Foundry for Builders program, which gives startups access to its Foundry software as the foundation for a company. While Palantir doesn’t invest in its former employees’ ventures as some companies do, the first class of Foundry for Builders was reserved for alumni companies, ranging from Medicare adviser Chapter to legal-tech software Hence to defense communication software Adyton.
“I notice that many Palantir alums have gone on to found companies where transforming an entire industry is the actual mission of their company,” said Palantir’s Meredith McNaughton, head of the Foundry for Builders program. “They have an ambitious use case in mind where getting the data foundation right is central to the success of their company.”
Building on top of Palantir’s products isn’t a prerequisite for membership in the Palantir Pack, though. The one thing linking the companies, Stephens says, is that there’s a story behind why each founder cared about starting them beyond just wanting to start a company.

“It’s rare to have that up-front perspective inside an enterprise to the real-world problems.”
“One of the most pathetic versions of Silicon Valley is what I would call whiteboard founders. There are people that are like, ‘I want to start a company because it’s the fastest path to making a ton of money, so I’m gonna stand in front of a whiteboard, I’m gonna write every idea I can come up with, and then pick the least bad one,’” Stephens said. “I don’t think Palantir alumni companies start that way.”
Often their work at Palantir informed their next company. Peregrine Technologies’ Nick Noone worked on law enforcement during his tenure at Palantir before starting a company focused on using data for public safety. The co-founders of Blend had worked on commercial lending projects at Palantir before starting their digital mortgage provider, which went public last year at a nearly $4 billion valuation. Mosaic’s co-founders all worked together on Palantir’s finance team before leaving to build better software for CFOs, recently raising $25 million from Founders Fund. Vontive’s Shreyas Vijaykumar spent seven years at Palantir and worked on a partnership with Freddie Mac, where he met his co-founder. The pair raised $135 million from Palantir-linked firms like Goldcrest Capital, 8VC and XYZ Venture Capital and emerged from stealth earlier this year with a data-focused approach to investment real estate mortgages.
Health care, an emerging focus of Palantir’s business, has seen a similar rise in startup interest from alumni. There are companies like Little Otter, focused on pediatric mental health, and Kranus Health, which is working on digital erectile dysfunction therapy. In Australia, Michael Winlo is leading Emyria, a drug-development biotech focused on psychedelics.
There’s even a handful of Web3 companies like unicorn OpenSea, which was co-founded by Alex Atallah, and consumer companies like Partiful, described as Eventbrite but for Gen Z.
A common link running through the network is the funders who support them. Much like the PayPal Mafia, an essential ingredient to the emergence of the Palantir Pack is having well-connected alumni who continue to back companies.

Lesiva’s Palumni VC is the fund most squarely aimed at the network, but Palantir alumni are also a force within venture capital more broadly. Lonsdale co-founded 8VC, where several ex-Palantir employees like Moore are partners. Stephens works with Thiel at Founders Fund. Tan, the 10th employee, later started Initialized Capital. Others, like Accel’s Steve Loughlin and XYZ’s Ross Fubini, were advisers to Palantir. Goldcrest’s Adam Ross, who was on Palantir’s board, is also a key investing link between many of the companies. When Ranade raised money for Arena earlier this year, he ended up with investment from Founders Fund, Initialized and Goldcrest — not even realizing that all three had Palantir ties.
The one part of Palantir’s culture and playbook that many alumni have purposefully chosen not to follow is Palantir’s penchant for secrecy. For Ranade, that means being transparent about the org structure and decisions inside the company. For Anduril, it’s meant explaining from the beginning what its technology can and can’t do, and getting multiple people from the company out there as public faces.
“There’s just no reason it needed to be as controversial as it was. And I think that that’s what we’re trying to work actively against at Anduril. It was like we’re just going to tell people exactly what we’re doing,” said Stephens.
The power of the Palantir network is that it keeps producing new companies. The summer 2022 Y Combinator batch includes at least four startups founded by Palantir alumni: Ilumadata, Moonshot, Medplum and Windmill.
Many employees join Palantir and do a tour of duty before taking some time off and starting their next thing, says 8VC’s Moore, who is still on the board of Palantir. The intensity of the company and its culture makes it feel like the engineering equivalent of wanting to join SEAL Team Six. “Instead of going to grad school, they go to Palantir,” Moore said. “They do their tour of duty and they make the company a little bigger, better, but it’s not a short-term optimization where you’re making a fortune. It’s not like crypto last year. There’s no tricks to it. It’s just hard work.”
But just as pressure can turn carbon into diamonds, it can turn engineers into the next generation of startup founders. Palantir used to be a footnote to the PayPal story, but through their distinct experiences, unique culture and refined playbook, its offspring have built out their own network in the Valley, one extensive enough that it might take Palantir software and a forward-deployed engineer to analyze its true extent and scale.

Biz Carson (@bizcarson) is a San Francisco-based reporter at Protocol, covering Silicon Valley with a focus on startups and venture capital. Previously, she reported for Forbes and was co-editor of the Forbes Next Billion-Dollar Startups list. Before that, she worked for Business Insider, Gigaom, and Wired and started her career as a newspaper designer for Gannett.
Don’t know what to do this weekend? We’ve got you covered.
Janko Roettgers (@jank0) is a senior reporter at Protocol, reporting on the shifting power dynamics between tech, media, and entertainment, including the impact of new technologies. Previously, Janko was Variety’s first-ever technology writer in San Francisco, where he covered big tech and emerging technologies. He has reported for Gigaom, Frankfurter Rundschau, Berliner Zeitung, and ORF, among others. He has written three books on consumer cord-cutting and online music and co-edited an anthology on internet subcultures. He lives with his family in Oakland.
What better way to spend the weekend than by listening to Mark Zuckerberg and Joe Rogan talk for three hours? Once you’re done, check out “Lost Ollie” with the kids and test your Netflix knowledge with Heads Up!
Think of Joe Rogan what you will, but when Zuckerberg sits down with the podcaster to share some exclusive news (Project Cambria is coming in October) as well as his thoughts on Meta’s hardware strategy, the emergence of VR fitness (“It happened way sooner than I thought”) and the future of visual computing and brain-computer interfaces, you kind of have to tune in. Just be warned: The whole conversation is almost three hours long!
The story of lost or discarded toys trying to find their way back to their owners is a tale as old as time, and there have been what feels like a dozen “Toy Story” movies dealing with the same subject. Still, Netflix’s new limited series “Lost Ollie” stands out from the crowd with its own take on growing up, the fleeting nature of childhood memories and the types of adventures only children and the young at heart can undertake. A great four-parter to watch with your little ones this weekend.
The charades game Heads Up has been a hit on iOS and Android for some time. Now Netflix has licensed the title as part of its growing mobile games initiative. But instead of replacing the existing version, the video service simply released a Netflix-specific version with tons of charades prompts related to shows like “Stranger Things,” “Bridgerton” and “Squid Game,” as well as categories like “Strong Black Lead,” “Netflix Family” and “True Crime.” It’s a fun game to play with all the TV and streaming nerds in your life. A Netflix subscription is required.
Microsoft wants to acquire Activision Blizzard for $68.7 billion. Take-Two has spent $12.7 billion to acquire Zynga. Sony has paid $3.6 billion for Bungie. All together, the video game industry has seen 651 transactions totaling $107 billion during the first half of this year alone. Will this trend continue, what is it driven by and what does it mean for game developers, players and the industry at large? In this deep dive, The Ringer explores the age of the gaming mega mergers, and it’s well worth a read.
A version of this story also appeared in today’s Entertainment newsletter.
If you thought the use of remote work, independent contractors and contingent workers rose sharply during the pandemic, just wait: The on-demand talent economy is poised for an even bigger uptick in the next few months.
Rising workload and pace, the stress of commuting and a taste of the flexible work-from-anywhere lifestyle have all contributed to what many are calling the Great Resignation, which is only just the beginning of the headwinds organizations are facing, says Tim Sanders, vice president of client strategy at Upwork, a marketplace that connects businesses with independent professionals and agencies around the globe.
“It began with front-line workers, but it’s not going to end there,” Sanders notes. “Recent data suggests that the biggest industries for quits are now software and professional services, and on top of that, I predict that we’ll see more leaders and managers continuing to quit their jobs.”
As the economy leans toward a recession, and layoffs across dozens of tech firms make headlines, Sanders predicts companies will increasingly turn to on-demand talent. “These highly skilled independent contractors and professionals offer the speed, flexibility and agility companies are seeking right now. Leaders are becoming more empowered to fully embrace a hybrid workforce and shift away from rigid models.”

Leaning into headwinds: Driving growth amid uncertainty
A recent report from Upwork, The Adaptive Enterprise, underscores the importance of flexible on-demand talent during uncertain times. Sanders notes: “A growing number of organizations, including Upwork and customers like Microsoft, Airbnb and Nasdaq understand that on-demand talent enables companies to reduce risk, drive cost savings, and at the same time, protect their people from burnout. Flexible workforce models also allow businesses to respond to and recover faster from crises than more traditional models.”
Some crises come in the form of economic slowdowns, while others can take the shape of geopolitical conflicts that disrupt life and work as we know it. Mitigating risk — such as a pandemic wave striking a certain region housing the majority of a company’s staff — is one reason businesses turn to on-demand talent, but it’s certainly not the only one.
CEOs surveyed by Deloitte in 2022 saw talent shortages as the biggest threat to their growth plans. The survey also reports that CEOs believe talent is the top disruptor to their supply chains, and that there is more to be gained by giving their workforce greater flexibility (83% agreed) than by merely offering more financial incentives. Top of mind for many business leaders is filling talent and skills gaps so they can deliver new products and enhanced services. In other words, companies are struggling to find the specific skill sets needed to advance their business objectives and innovation agendas.
The biggest benefit of leveraging on-demand talent is often tapping into the talent and skills that businesses can’t find elsewhere. Upwork’s recent report highlights that 53% of on-demand talent provide skills that are in short supply for many companies, including IT, marketing, computer programming and business consulting.

By harnessing a global talent pool from digital marketplaces like Upwork, businesses gain wider access to skilled talent who can accelerate what those companies offer customers, at a fraction of the cost. “Skillsourcing” on-demand talent lets companies keep a more compact population of full-time employees who concentrate on the work only they can do and maximize their strengths, while independent professionals handle the rest.
Behind the growth: Speed, flexibility and agility
Speed, flexibility and agility are three critical benefits offered by on-demand talent to businesses seeking competitive advantages in their sector. While on-demand talent solutions give companies speed-to-market advantages, Sanders sees that they also give organizations a strategic form of flexibility.
“An agile organization is able to make bold and quick moves without breaking everything,” Sanders says, “and look at a number of our Fortune 100 customers that have a workforce made up of almost half on-demand talent, and how they can pivot on a dime. It’s a case of structure enabling strategy.”
As for the speed and efficiency of the actual work, Sanders says clients report that when hiring managers are given access to on-demand talent, they engage the talent they need within days instead of months, and when they bring them onto projects, the work is completed up to 50% faster than through traditional avenues.
Sanders says, “Businesses have realized that remote work experiences are best led and judged by outcomes, not just time in the office, and more leaders are comfortable and confident opting for a hybrid workforce that can deliver based on those outcomes.”
Upwork’s Labor Market Trends and Insights page shows that organizations are indeed ramping up their hybrid workforces: 60% of businesses surveyed said they plan to use more on-demand talent in the next two years.
“The old way of acquiring talent isn’t efficient,” Sanders says. “Staffing firms aren’t the silver-bullet solution they once were, and more businesses need to rethink and redesign their workforce with on-demand talent as the economy and work rapidly evolve. The conversation is no longer about the future of work, but the future of winning.”

Sommer Panage, Slack’s senior accessibility manager, talks about her goals since joining the company in April and how she hopes to build a more accessible product.
“’How could someone else experience this?’ is the number one question we ask.”
Sarah (@Sarahroach_) writes for Source Code at Protocol. She’s based in Boston and can be reached at sroach@protocol.com.
Before Sommer Panage joined Slack, there was no centralized team working on accessibility.
Panage said there were some people who focused on desktop accessibility and others who worked on Slack for mobile, but they were scattered across the company. Panage joined Slack a few months ago as senior engineering manager and helped bring the company’s accessibility efforts under one roof. Before joining, she worked on accessibility efforts on iOS at Apple and held roles at Twitter before that.
Slack recently announced updates to improve keyboard navigation and introduced a new interface for screen readers as well as what the company called “an ongoing effort to bridge gaps.” Panage said bringing together one unified accessibility team has helped Slack focus on these different areas of improvement and work with teams across the company to build new features with accessibility in mind. But she stressed that the work is ongoing.
“Accessibility is never done,” she told Protocol. “A common challenge for companies is to say, ‘Oh, we made our product accessible. And now it’s done.’ But it’s not the case.”

This interview has been edited for clarity and brevity.
How is Slack’s approach to the topic different from others?
In large companies, in the Apples or the Microsofts and the big companies of the world, there’s definitely an accessibility team. But I think it’s much less common in the small companies, and often there will be people who care deeply about it, and they might be scattered. They might start a networked effort across the company. I’ve seen that in various places as well, but it’s not necessarily the standard for companies to have an accessibility team, a centralized hub of accessibility. That’s one thing that Slack recognized pretty early as it started to grow … It’s not super common, but it is super beneficial.
Can you point me to a time, either at Slack or a previous position, when you had an idea that didn’t work out in the way you expected? And on the flip side, what was a change you made that had an immediate impact?
Accessibility is a field that, especially when I started in it over 10 years ago, there was not a lot of information. There were standards online that I could read about there, but there was not much else. So I made a lot of mistakes early in my career. A common one I think folks will make as developers is to overlabel things or be overly verbose when you’re thinking about screen-reader experience. So that was a mistake I made in many ways multiple times … We started getting this feedback from our screen-reader users saying, “Oh, hey, this is way too verbose. This is not helpful to me.” That was where I learned two lessons. One, verbosity is incredibly important for screen-reader users. Two, listening to our users is vital to making good decisions about the product, and certainly that’s something that Slack was already doing before I arrived.
“[L]istening to our users is vital to making good decisions about the product.”

As far as things that have gone really well, sometimes a very small idea can be a really big thing. One of the changes that we recently made in our updates at Slack was to add a couple preferences that allow users more fine-grained control over how their messages are read out. And it sounds so simple, right? It’s like, “Oh, you read the date first or the date last.” It’s the little preference, but this can be so important for someone using a screen reader because listening takes time. If the information I want is up front, you’ve just made me so much more efficient.
How did you decide to focus on these areas of improvement?
At Slack, we focus heavily on what our users tell us and the experiences that they’re having. So this work stems from a large amount of time and feedback and process with both external user feedback that comes in through our various feedback systems as well as user groups who are full-time assistive technology users. By combining feedback from these two spots, we’ve found key pain points within the Slack product that we knew we wanted to really focus on.
And those were really focused around the notion of keyboard navigation and keyboard focus. We had a lot of feedback from our screen-reader users. And so we wanted to make sure we put a lot of work there to make sure the desktop product was fantastic for them.
Since joining Slack, what have been your top goals in terms of accessibility?
One of the things I’ve really wanted to focus on is thinking about how Slack can really take a stance in accessibility and build the product to be something that says, “This is how Slack should work from an accessibility standpoint. And this is how we believe — with the feedback of our users and with what we’ve learned in our research — this is what we want to create.”
The other thing is thinking about each platform individually. Just because there’s a cohesive picture for accessibility doesn’t mean it’s going to behave exactly the same on each platform. It might need to be different. Android is different from iOS, which is different from web, etc. And so a second-tier focus there is then thinking about, “OK, so now we’ve agreed on how we want to approach it. What does that look like on Android? What does that look like on a web product?”

And then certainly as well, looking for any broken windows, any things where we’re like, “Hey, this needs to be better.” So one thing that you may have noticed if you happen to use our Android product is significant improvements in our larger text size support.
What do you mean by your goal that you want to “take a stance” on accessibility?
We’re thinking about Slack really as the digital headquarters right now. This is a place you go to get work done. Part of that stance is making sure that Slack is a place for everyone to come and to get their work done. And it’s really about Slack being this digital headquarters that is equitable, that is delightful to use and that is efficient for all of our users.
And the other part of taking a stance on accessibility is about how we do accessibility. Not just that our product will be equitable, but also how do we actually approach making that happen? And the approach part of it is really strongly based, for us, in user-centered design and user-centered engineering. From both perspectives — from the design perspective and from the perspective from which we build the products — we want to be sure that we’re drawing from our users and understanding what they’re experiencing and what they experience in other products and on different platforms.
What does the process look like for introducing an accessibility improvement at Slack to implementing it?
We’ll prototype an idea and we’ll get something working, something functional, say, for example, a screen-reader feature. And so we’ll get something prototyped and ready to share with our internal user groups who are full-time assistive technology users. And by doing that we can get that early information as to whether or not this idea was good or not … and there has to be a really big willingness to be wrong, because sometimes we don’t get it right.

From there, we’ll have a prototype, we’ll iterate with our internal user group and start to home in on what the product is going to be. And then that will develop into a feature brief and something that becomes part of our road map for the accessibility team. From there, it’s going to go through a pretty standard process of planning and execution all the way through. But through that planning and execution, we’ll continue to iterate with our user group, so it won’t just go into a box and come out the other side, but rather at various milestones, we will go back with them and say, “Hey, can you try this out? Give us feedback, take a pass at it.”
Would you say someone with an accessibility background should be in the room during every conversation about different Slack changes?
To some degree, yes. Obviously having accessibility in the room, in every meeting — we can’t scale that. But when a feature is in the early phase — a great example would be something like the Huddles feature, where having someone say, “Hey, these are going to need captions” really early on — that’s a really fantastic example of what happens when someone with an accessibility mindset is in that room really early.
What was your perspective as a user before joining Slack versus your experience since joining?
One of the things that drew me to Slack was noticing the progress they were making on accessibility. I’m always tinkering. I will make the text size way big, I will invert the colors on my phone, I will do all kinds of things just to see what happens, to learn about the product. And I was consistently noticing that Slack was making improvements. My friends who have visual impairments and my friends who are hard of hearing had made comments about some things they noticed and been impressed with the product. And so early on, I knew that Slack was a company that was really putting some great effort into accessibility, and that drew me there.

Since coming in and joining, that perspective hasn’t really changed. Now I just know the people who are doing this work. But since joining Slack, I noticed that the work was coming from a lot of different places. And so that was kind of what pulled me in. I thought, “Oh, it’d be so great if this were an accessibility team all under one roof working together,” because I think you can be more efficient that way. You can reach out through the company more successfully when you’re coming from a centralized place. And it also helps people know who to go to when they have a question. So that was the shift I wanted to help Slack achieve by joining.
How has your background in psychology helped you in your role?
It’s one of those degrees I did not anticipate I would utilize, and I found it very helpful in my career in technology and then specifically in accessibility. In studying psychology and doing a bit of psychology research through my undergrad, one of the things that I had to learn was a lot about thinking about how people think. And that particular skill … all of that became very, very useful then when I came to work in accessibility. “How could someone else experience this?” is the number one question we ask.
One of your goals at Slack is to identify “broken windows.” Are there any that you want to focus on in the coming year or so?
One that I’m very excited about is just seeing us improve how our Android product handles text sizes. I think that one of the challenges that the recent changes are trying to solve is around the fact that Slack is a web product on desktop. And so I think that because of that, I wouldn’t necessarily say there are a lot of broken windows around it, but it creates challenges because it’s a web product as a desktop app.

For a company that’s scaling, how can you keep accessibility in mind?
It’s a really big challenge. No question there. As a company grows, one thing that is important to establish pretty early on is what the accessibility process looks like … It’s really important to have a process for something like accessibility in the same way we would for security, right?
As a company is growing and adding teams, it’s important to have a way that says, “This is how Slack does accessibility.” So as a new team spins up, that process is already there for them. They just need to look into it. They don’t have to reinvent the wheel as to what these things mean. The process and the “how” is already there. In Slack’s case, that means the design reviews and the accessibility review toward the end, and the office hours.
Sarah Roach (@sarahroach_) writes for Source Code at Protocol. She’s based in Boston and can be reached at sroach@protocol.com.
Snapchat relied on microservices and a multicloud strategy to overhaul its technology approach as it grew.
Jerry Hunter, senior vice president of engineering at Snap, told Protocol about its infrastructure.
Donna Goodison (@dgoodison) is Protocol’s senior reporter focusing on enterprise infrastructure technology, from the ‘Big 3’ cloud computing providers to data centers. She previously covered the public cloud at CRN after 15 years as a business reporter for the Boston Herald. Based in Massachusetts, she also has worked as a Boston Globe freelancer, business reporter at the Boston Business Journal and real estate reporter at Banker & Tradesman after toiling at weekly newspapers.
In 2017, 95% of Snap’s infrastructure was running on Google App Engine. Then came the Annihilate FSN project.
Snap, which launched in 2011, was built on GAE — FSN (Feelin-So-Nice) was the name for the original back-end system — and the majority of Snapchat’s core functionality was running within a monolithic application on it. While the architecture initially was effective, Snap started encountering issues when it became too big for GAE to handle, according to Jerry Hunter, senior vice president of engineering at Snap, where he runs Snapchat, Spectacles and Bitmoji as well as all back-end or cloud-based infrastructure services.
“Google App Engine wasn’t really designed to support really big implementations,” Hunter, who joined the company in late 2016 from AWS, told Protocol. “We would find bugs or scaling challenges when we were in our high-scale periods like New Year’s Eve. We would really work hard with Google to make sure that we were scaling it up appropriately, and sometimes it just would hit issues that they had not seen before, because we were scaling beyond what they had seen other customers use.”

Today, less than 1.5% of Snap’s infrastructure sits on GAE, a serverless platform for developing and hosting web applications, after the company broke apart its back end into microservices backed by other services inside of Google Cloud Platform (GCP) and added AWS as its second cloud computing provider. Snap now picks and chooses which workloads to place on AWS or GCP under its multicloud model, playing the competitive edge between them.
The Annihilate FSN project came with the recognition that microservices would provide a lot more reliability and control, especially from a cost and performance perspective.
“[We] basically tried to make the services be as narrow as possible and then backed by a cloud service or multiple cloud services, depending on what the service we were providing was,” Hunter said.
Snapchat now has 347 million daily active users who send billions of short videos and photos, called Snaps, or use its augmented-reality Lenses.
Its new architecture has resulted in a 65% reduction in compute costs, and Hunter said he has come to deeply understand the importance of having competitors in Snap’s supply chain.
“I just believe that providers work better when they’ve got real competition,” said Hunter, who left AWS as a vice president of infrastructure. “You just get better … pricing, better features, better service. We’re cloud-native, and we intend on staying that way, and it’s a big expense for us. We save a lot of money by having two clouds.”
The Annihilate FSN process wasn’t without at least one failed hypothesis. Hunter mistakenly thought that Snap could write its applications on one layer and that layer would use the cloud provider that best fit a workload. That proved to be way too hard, he said.
“The clouds are different enough in most of their services and changing rapidly enough that it would have taken a giant team to build something like that,” he said. “And neither of the cloud providers were interested at all in us doing that, which makes sense.”

Instead, Hunter said, there are three types of services that he looks at from the cloud.
“There’s one which is cloud-agnostic,” he said. “It’s pretty much the same, regardless of where you go, like blob storage or [content-delivery networks] or raw compute on EC2 or GCP. There’s a little bit of tuning if you’re doing raw compute but, by and large, those services are all pretty much equal. Then there’s sort of mixed things where it’s mostly the same, but it really takes some engineering work to modify a service to run on one provider versus the other. And then there’s things that are very cloud-specific, where … only one cloud offers it and the other doesn’t. We have to do this process of understanding where we’re going to spend our engineering resources to make our services work on whichever cloud that it is.”
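The triage Hunter describes can be sketched as a simple classification exercise: bucket each service by how portable it is across providers, then estimate where engineering effort would land if the workload had to run on a second cloud. The service names, categories and effort weights below are illustrative assumptions, not Snap's actual inventory.

```python
# Hypothetical sketch of the agnostic / mixed / cloud-specific triage.
# All names and numbers are made up for illustration.

PORTABILITY = {
    "blob-storage": "agnostic",       # roughly equivalent on AWS and GCP
    "cdn": "agnostic",
    "raw-compute": "agnostic",        # minor per-provider tuning
    "managed-queue": "mixed",         # same idea, different APIs
    "bigquery-analytics": "specific", # only one provider offers it
}

# Rough engineering cost (arbitrary units) to bring a service of each
# category up on a second cloud.
MIGRATION_EFFORT = {"agnostic": 1, "mixed": 5, "specific": 20}

def second_cloud_effort(services: list) -> int:
    """Sum the estimated effort to run these services on another cloud."""
    return sum(MIGRATION_EFFORT[PORTABILITY[s]] for s in services)

print(second_cloud_effort(["blob-storage", "managed-queue"]))  # 6
print(second_cloud_effort(["bigquery-analytics"]))             # 20
```

The point of the exercise is the skew: agnostic services move almost for free, while each cloud-specific dependency dominates the bill, which is why Hunter frames the decision as "where we're going to spend our engineering resources."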
Snap’s current architecture also has resulted in reduced latency for Snapchatters.
In its early days, Snap had its back-end monolith hosted in a single region in the middle of the United States — Oklahoma — which impacted performance and the ability for users to communicate instantly. If two people living a mile apart in Sydney, Australia, were sending Snaps to each other, for example, the video would have to traverse Australia’s terrestrial network and an undersea cable to the United States, be deposited in a server in Oklahoma and then backtrack to Australia.
“If you and I are in a conversation with each other, and it’s taking seconds or half a minute for that to happen, you’re out of the conversation,” Hunter said. “You might come back to it later, but you’ve missed that opportunity to communicate with a friend. Alternatively, if I have just the messaging stack sitting inside of the data center in Sydney … now you’re traversing two miles of terrestrial cable to a data center that’s practically right next to you, and the entire transaction is so much faster.”

Snap wanted to regionalize its services where it made sense. The only way to do that was by using microservices and understanding which services were useful to have close to the customer and which ones weren’t, Hunter said.
“Customers benefit by having data centers be physically closer to them because performance is better,” he said. “CDNs can cover a lot of the broadcast content, but when doing one-on-one communications with people — people send Snaps and Snap videos — those are big chunks of data to move through the network.”
That ability to switch regions is one of the benefits of using cloud providers, Hunter said.
“If I want to experiment and move something to Sydney or Singapore or Tokyo, I can just do it,” he said. “I’m just going to call them up and say, ‘OK, we’re going to put our messaging stack in Tokyo,’ and the systems are all there, and we try it. If it turns out it doesn’t actually make a difference, we turn that service off and move it to a cheaper location.”
Snap has built more than 100 services for very specific functions, including Delta Force.
In 2016, any time a user opened the Snapchat app, it would download or redownload everything, including stories that a user had already looked at but hadn’t yet timed out in the app.
“It was … a naive deployment of just ‘download everything so that you don’t miss anything,’” Hunter said. “Delta Force goes and looks at the client … finds out all the things that you’ve already downloaded and are still on your phone, and then only downloads the things that are net-new.”
This approach had other benefits.
“Of course, that turns out to make the app faster,” Hunter said. “It also costs us way less, so we reduced our costs enormously by implementing that single service.”
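The idea behind Delta Force, as Hunter describes it, is a standard delta-sync pattern: the client reports what it already has cached, and the server returns only the net-new items. A minimal sketch, with illustrative names rather than Snap's actual API:

```python
# Hypothetical delta-sync sketch: serve only content the client
# does not already hold. Identifiers and data are illustrative.

def delta_sync(server_content: dict, client_ids: set) -> dict:
    """Return only the items whose IDs the client hasn't downloaded yet."""
    return {cid: item for cid, item in server_content.items()
            if cid not in client_ids}

# Usage: the device already has stories "a" and "b" cached,
# so only "c" comes down the network.
server_content = {"a": "story-a", "b": "story-b", "c": "story-c"}
on_device = {"a", "b"}
print(delta_sync(server_content, on_device))  # {'c': 'story-c'}
```

Shipping only the delta is what produces both wins Hunter mentions: the app opens faster because less data moves, and the egress bill shrinks for the same reason.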
Snap uses open-source software to create its infrastructure, including Kubernetes for service development, Spinnaker for its application team to deploy software, Spark for data processing and memcached/KeyDB for caching. “We have a process for looking at open source and making sure we’re comfortable that it’s safe and that it’s not something that we wouldn’t want to deploy in our infrastructure,” Hunter said.

Snap also uses Envoy, an edge and service proxy and universal data plane designed for large, microservice service-mesh architectures.
“I actually feel like … the way of the future is using a service mesh on top of your cloud to basically deploy all your security protocols and make sure that you’ve got the right logins and that people aren’t getting access to it that shouldn’t,” Hunter said. “I’m happy with the Envoy implementations giving us a great way of managing load when we’re moving between clouds.”
Hunter prefers using primitives or simple services from AWS and Google Cloud rather than managed services. A Snap philosophy that serves it well is the ability to move very fast, Hunter said.
“I don’t expect my engineers to come back with perfectly efficient systems when we’re launching a new feature that has a service as a back end,” he said, noting many of his team members previously worked for Google or Amazon. “Do what you have to do to get it out there, let’s move fast. Be smart, but don’t spend a lot of time tuning and optimizing. If that service doesn’t take off, and it doesn’t get a lot of use, then leave it the way it is. If that service takes off, and we start to get a lot of use on it, then let’s go back and start to tune it.”
It’s through that tuning process of understanding how a service operates where cycles of cloud usage can be reduced and result in instant cost savings, according to Hunter.
“Our total compute cost is so large that little bits of tuning can have really large amounts of cost savings for us,” he said. “If you’re not making the sort of constant changes that we are, I think it’s fine to use the managed services that Google or Amazon provide. But if you’re in a world where we’re constantly making changes — like daily changes, multiple-times-a-day changes — I think you want to have that technical expertise in house so that you can just really be on top of things.”

Three factors figure into Snap’s ability to reap cost savings: the competition between AWS and Google Cloud, Snap’s ability to tweeze out costs as a result of its own work and going back to the cloud providers and looking at their new products and services.
“We’re in a state of doing those three things all the time, and between those three, [we save] many tens of millions of dollars,” Hunter said.
Snap holds a “cost camp” every year where it asks its engineers to find all the places where costs possibly could be reduced.
“We take that list and prioritize that list, and then I cut people loose to go and work on those things,” he said. “On an annual basis, depending on the year, it’s many tens of millions of dollars of cost savings.”
Snap has considered adding a third cloud provider, and it could still happen some day, although the process is pretty challenging, according to Hunter.
“It’s a big lift to move into another cloud, because you’ve got those three layers,” he said. “The agnostic stuff is pretty straightforward, but then once you get to mixed and cloud-specific, you’ve got to go hire engineers that are good at that cloud, or you’ve got to go train your team up on … the nuances of that cloud.”
Enterprises considering adding another cloud provider need to make sure they have the engineering staff to pull it off: 20 to 30 dedicated cloud people as a starting point, Hunter said.
“It’s not cheap, and second, that team has to be pretty sophisticated and technical,” he said. “If you don’t have a big deployment, it’s probably not worth it. I think about a lot of the customers I used to serve when I was in AWS, and the vast majority of them, their implementations … were serving their company’s internal stuff, and it wasn’t gigantic. If you’re in that boat, it’s probably not worth the extra work that it takes to do multicloud.”

Tech companies that rely on cloud computing and want to reduce their carbon emissions should take a long look at a new report.
Lisa Martine Jenkins is a senior reporter at Protocol covering climate. Lisa previously wrote for Morning Consult, Chemical Watch and the Associated Press. Lisa is currently based in Brooklyn, and is originally from the Bay Area. Find her on Twitter ( @l_m_j_) or reach out via email (ljenkins@protocol.com).
A new report has revealed the most climate-friendly regions in which to operate data centers. The findings point to the challenges holding the sector back from reducing carbon emissions, as well as ways tech companies can mitigate the climate toll of their cloud computing demands.
The report, released Thursday by cloud management platform Cirrus Nexus, analyzed the energy consumed over the course of a week in regions of the U.S. and Europe where major cloud service providers tend to concentrate their data centers. It then estimated each region’s carbon intensity, a metric of the amount of carbon dioxide emitted per unit of electricity generated (in this case, grams per kilowatt hour).
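Carbon intensity as the report defines it is grams of CO2 emitted per kilowatt-hour of electricity generated, so a workload's total emissions follow directly from its energy draw. A quick sketch with illustrative numbers (the intensities below are ballpark figures for coal-heavy versus hydro/nuclear-heavy grids, not values from the report):

```python
# Emissions from a workload, given the grid's carbon intensity.
# Intensity values here are illustrative, not from the report.

def emissions_kg(energy_kwh: float, intensity_g_per_kwh: float) -> float:
    """Total CO2 in kilograms: energy (kWh) x intensity (g/kWh) / 1000."""
    return energy_kwh * intensity_g_per_kwh / 1000.0

# The same 10,000 kWh workload on a coal-heavy grid (~800 gCO2/kWh)
# versus a hydro/nuclear-heavy one (~50 gCO2/kWh):
print(emissions_kg(10_000, 800))  # 8000.0 kg
print(emissions_kg(10_000, 50))   # 500.0 kg
```

The 16x spread between those two figures is the whole argument for region selection: identical compute, wildly different climate toll.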
Chris Noble, CEO and co-founder of Cirrus Nexus, said the report emerged out of a desire to recommend the regions with the least carbon-intensive data centers. However, Noble said, there’s “not a simple answer.” While regions that rely the most on solar, wind, hydro and nuclear power tend to have the lowest carbon intensity, that measure fluctuates dramatically due to renewables’ intermittency when the sun isn’t shining or the wind isn’t blowing.

In the U.S., Midwestern data centers were consistently among the most carbon-intensive due to the grid’s heavy reliance on coal and methane gas. Texas, in comparison, relies on both wind and gas. That leaves it a cut above the Midwest but worse off than the Northwest, where hydropower plays a major role in electricity generation.
In Europe, data centers located in Sweden and France — both of which rely largely on nuclear, though Sweden has abundant hydro resources as well — had the lowest carbon intensities. The countries also avoided the peaks and valleys in carbon intensity common across countries like Italy and Germany, which have solar infrastructure but rely on fossil alternatives when the sun is not shining.
Ireland offered a particularly stark example of the swings in carbon intensity that come with renewables. The country started the week Cirrus Nexus analyzed with a carbon intensity in the middle of the European pack. But when the wind slackened mid-week, it became the dirtiest region in Europe. Once the wind picked up again, though, Ireland rocketed to third-cleanest and even generated excess power, which it exported to the U.K.
The report emphasized the importance of increasing energy storage. Doing so would allow the grid — and the cloud computing infrastructure that relies on it — to smooth out the inconsistency of renewables without relying on fossil fuels in the absence of sun or wind.
Noble said it would behoove companies to factor fluctuating carbon intensities into where they locate their operations, if minimizing their climate toll is deemed a corporate priority: “Companies should also focus on optimizing their operations in order to reduce total emissions, not just use carbon credits to offset,” he added.
However, Noble said companies that buy cloud computing services historically have had a blind spot for the emissions tied to data center operations, and factors like cost and proximity to a company’s main operations generally outweigh carbon intensity when selecting a cloud computing provider.

Complicating matters is the fact that the regions with the lowest carbon intensity also tend to offer the most expensive cloud computing services. And the report points out that if demand for clean computing increases, it could actually drive up prices even more in the short to medium term, at least until more carbon-free generation capacity comes online.
Tech companies with cloud computing workloads generally look to cloud management platforms to oversee both their systems and how much they spend on them. Cirrus Nexus advises companies to design their applications so that their workloads can be moved between data centers to keep costs down as they fluctuate over time; according to Noble, an increasing number of the company’s clients have asked about managing carbon as well.
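Moving workloads between regions as price and carbon intensity fluctuate can be framed as a simple scoring problem: normalize both factors and pick the region minimizing a weighted blend. Everything below — the regions, prices, intensities and weighting scheme — is an illustrative assumption, not Cirrus Nexus's actual methodology.

```python
# Hypothetical region-selection sketch: blend normalized price and
# carbon intensity. All data and weights are made up for illustration.

REGIONS = {
    # region: (price $/vCPU-hour, carbon intensity gCO2/kWh)
    "us-midwest": (0.030, 700),
    "eu-sweden":  (0.042, 40),
    "eu-ireland": (0.038, 350),
}

def pick_region(carbon_weight: float = 0.5) -> str:
    """Choose the region minimizing a weighted blend of normalized price
    and carbon intensity (0.0 = price only, 1.0 = carbon only)."""
    max_price = max(p for p, _ in REGIONS.values())
    max_carbon = max(c for _, c in REGIONS.values())

    def score(region: str) -> float:
        price, carbon = REGIONS[region]
        return ((1 - carbon_weight) * price / max_price
                + carbon_weight * carbon / max_carbon)

    return min(REGIONS, key=score)

print(pick_region(carbon_weight=0.0))  # us-midwest (cheapest wins)
print(pick_region(carbon_weight=1.0))  # eu-sweden (cleanest wins)
```

Because carbon intensity swings hour to hour with wind and sun, the score would need to be re-evaluated continuously in practice, which is exactly why the report stresses designing applications so workloads can actually move.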
Ultimately, Noble said the carbon intensity of cloud operations is a function of what customers demand. If they suddenly tell cloud computing providers that they will go somewhere else unless the provider minimizes its carbon intensity, Noble said there could be a rush to bolster data centers with solar panels or storage.
But that all starts with companies actually factoring carbon intensity into their decision of where to go to get their cloud computing needs met.
