Discover our Tech Team
Get to know what our tech team has to offer
At travel audience we have one of the most dynamic and engaged tech teams; our engineers, analysts, and scientists tackle complex challenges while taking responsibility and ownership from day one. Our tech leadership averages over 10 years of experience, with notable achievements across industries at companies ranging from early-stage startups to established enterprises.
Our tech and data teams are at the core of our company: they drive innovation and manage our proprietary technology and databases. The engineering teams work closely together, lean on one another daily, and take part in regular knowledge-sharing activities. They work with one of the most modern and diverse tech stacks in the industry, guided by senior tech leadership with over 10 years in the space.
Our current challenges include scaling up our technology to meet the needs of our clients and those of the ever-evolving travel industry. We believe in acceleration through agility; we empower our tech team to follow agile principles and embrace a customer-oriented and responsive working style.
Big data & Real time
Our data team deals with tens of thousands of events per second and terabytes of data per day. There are always new ways to leverage our data and new challenges to solve, which requires an agile approach to architecture and a willingness to embrace new technologies, whether open-source or cloud-based services.
Our backend crew building high-throughput, low-latency, highly resilient software for real-time bidding and ad delivery
Our frontend and backend engineers developing the advanced graphical tooling for administrating ad campaigns
Our SRE team operating our cloud environment, applications, and GitOps CI/CD pipelines using modern, state-of-the-art technologies
IT Service desk
Our team managing the IT systems, tools, and processes that enable our employees to perform their jobs securely and successfully
Our team developing the data platform for ingesting, cleaning, aggregating, and storing terabytes of data per day
Our dedicated data science crew developing state-of-the-art machine learning solutions to optimize the audiences targeted by ad campaigns, budgets, and resource allocation
Data Analytics and Reporting
Our team dedicated to extracting insights from our BI platform and enabling decision-making for all our customers and internal stakeholders
Tech blog
More resources
At travel audience, we provide integrated data-driven solutions for travel advertising. Since we deal with terabytes of data per day, selecting the right tools for data workloads is essential for us. Being on the Google Cloud Platform (GCP), we rely heavily on its most prominent technology, BigQuery. Though we use various other GCP components in our data teams, if I had to pick one component to recommend, it would be BigQuery, without a doubt. BigQuery is fast, powerful, and in most cases cost-effective.
This article from Google provides a good overview of BigQuery for a data warehouse practitioner. However, organizing data is not covered in much detail. This post focuses solely on that part: how to organize data in BigQuery for effective and compliant management across multiple teams in your organization. Each organization is different, and Conway's law definitely applies to data modeling. Still, we think this could be a starting point for anyone to build on.
When we started with BigQuery, we focused mostly on “how to get work done” and not much on effective data management. But pretty soon, we ran into the following problems.
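To make the idea of cross-team organization concrete, here is a small sketch of the kind of convention that helps: naming datasets by team and data layer. The names and layers below are hypothetical illustrations, not our actual layout.

```python
# Hypothetical helper encoding a per-team BigQuery dataset naming
# convention of the form <team>_<layer>. A fixed set of layers keeps
# dataset names predictable across teams.

VALID_LAYERS = ("raw", "staging", "reporting")

def dataset_id(team: str, layer: str) -> str:
    """Build a dataset name following the team_layer convention."""
    if layer not in VALID_LAYERS:
        raise ValueError(f"unknown layer: {layer}")
    return f"{team.lower()}_{layer}"

# e.g. the data science team's cleaned data would live in:
print(dataset_id("DataScience", "staging"))  # datascience_staging
```

A convention like this makes ownership and access control (who may read which layer) straightforward to express per dataset.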
Bayesian modelling for predicting winning probabilities of bids in ad auctions
In our previous post on bid optimisation, we concluded with a clichéd cliffhanger. Better with something Bayesian, until next time. Clickbait from an era before clicks. It was not all just wishful thinking, though. Back then we were already working on a Bayesian win price prediction model, and having now put it into production, we are in a position to share why we strongly believe it is worth adopting a Bayesian approach for win price prediction in ad auctions.
Generally speaking, Bayesian approaches refer to updating prior beliefs, expressed as some distributions, based on observed evidence to infer current beliefs, expressed as some posterior distributions. Kind of an incremental model update, you’d say? Yes, but the key word here was not updating, it was distributions. Namely, when performing Bayesian win price prediction, one does not predict a single number, or a point estimate, based on the input features, but a probability distribution for the prediction.
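As a toy illustration of updating a prior with evidence (deliberately much simpler than a production win price model, which conditions on bid request features and predicts a full price distribution), consider a conjugate Beta update of the win probability at a single fixed bid price:

```python
# Toy Bayesian update: a Beta prior over the probability of winning an
# auction at one fixed bid price, updated with observed win/loss outcomes.
# The posterior is a distribution, not just a point estimate.

from dataclasses import dataclass

@dataclass
class BetaWinModel:
    alpha: float = 1.0  # prior pseudo-count of wins
    beta: float = 1.0   # prior pseudo-count of losses

    def update(self, won: bool) -> None:
        if won:
            self.alpha += 1
        else:
            self.beta += 1

    def mean(self) -> float:
        # Point estimate: posterior mean of the win probability.
        return self.alpha / (self.alpha + self.beta)

    def variance(self) -> float:
        # The uncertainty a bare point estimate would hide.
        a, b = self.alpha, self.beta
        return a * b / ((a + b) ** 2 * (a + b + 1))

model = BetaWinModel()
for outcome in [True, True, False, True, False]:
    model.update(outcome)
print(round(model.mean(), 3))  # 0.571 after 3 wins and 2 losses
```

The payoff of keeping the whole posterior is that downstream bidding logic can reason about uncertainty, not just the expected value.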
Ad auction bidding strategy for several competing performance metrics as a constrained optimisation problem
Here at travel audience we are in the business of adtech, i.e. programmatic online advertising. As a Demand Side Platform (DSP), our algorithms find the optimal audience to target in order to bring value to our clients - the advertisers. Targeting the identified audience with ads proceeds via participating in an online auction, which is triggered every time a user visits a website run by a publisher who wants to monetize the visits. All DSPs participating in the auction submit their bids and the highest bid wins the right to show an ad from a client to the website visitor.
Now, this of course is an extreme simplification of the process every DSP executes in less than 100 milliseconds for each of the tens of thousands of bid requests they receive every second. Does the bid request for a given user fit the targeting criteria of the clients? Which ad from which campaign of which advertiser should one pick for this bid request? Which provides the most value to the client, and how to predict this value? What is the right bid given the expected value to the advertiser and our expected margin?
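A stripped-down sketch of the last question (hypothetical numbers and win curve, not our algorithm): pick the bid that maximizes expected surplus, i.e. the probability of winning at that bid times the value left over after paying it.

```python
# Minimal bid selection sketch: choose the bid from a discrete grid that
# maximizes expected surplus = P(win | bid) * (expected_value - bid).

def win_prob(bid: float) -> float:
    # Stand-in for a learned win price model: higher bids win more often.
    return min(1.0, bid / 2.0)

def best_bid(expected_value: float, grid: list[float]) -> float:
    def surplus(bid: float) -> float:
        return win_prob(bid) * (expected_value - bid)
    return max(grid, key=surplus)

grid = [round(0.1 * i, 1) for i in range(1, 21)]  # candidate bids 0.1 .. 2.0
print(best_bid(1.0, grid))  # 0.5 under this win curve
```

In practice the constraints mentioned above (margin targets, competing performance metrics, budget pacing) turn this one-line maximization into a genuine constrained optimisation problem.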
If you’ve been following our other blog posts at tech.travelaudience.com, you’ll know that we’re focused on running all our apps in Kubernetes. You’ll also notice that we’re big into Helm for packaging our k8s manifests. This post will get into the benefits of using Kubernetes for ephemeral environments and how Armador makes use of Helm to create them.
When we started running our apps in Kubernetes we used an “umbrella” chart, which listed each of the microservices as dependencies in one Helm chart. The “umbrella” chart worked because it allowed a single command to install all the services into an environment. But as more apps got released into k8s and demanded their own release cycles, the umbrella chart was no longer scalable. So we broke it apart, and each app was managed with its own CD pipeline.
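For readers unfamiliar with the pattern, an “umbrella” chart is simply a parent chart whose only job is to declare the microservices as dependencies. A minimal illustration (the service names and repository URL here are made up):

```yaml
# Illustrative umbrella Chart.yaml (Helm v3 layout); a single
# `helm install` of this chart deploys every listed dependency.
apiVersion: v2
name: platform
version: 0.1.0
dependencies:
  - name: bidder
    version: 1.4.2
    repository: "https://charts.example.com"
  - name: campaign-ui
    version: 2.0.1
    repository: "https://charts.example.com"
```

The convenience of one install is exactly what becomes a liability once each dependency needs to be versioned and released independently.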
Developers now had an easy way to deploy their app into staging/production, but what we didn’t have was somewhere to test the full system. A key aspect of a microservice architecture is to make sure the individual services work in isolation, but it’s also important to make sure the service works in the full system. Providing developers a way to run a multi-service environment on their own machine proved to be complicated.
Meet the team
Our tech stack
Our hiring process
From submitting your application to receiving the offer, here is our hiring process simply explained, so you know what to expect.
We encourage our tech teams to take risks and develop without boundaries because we want our people to push for innovation. Looking for your next challenge? Don’t look any further and apply today!
Can’t find what you’re looking for? You can send your spontaneous application or just get in touch with us at firstname.lastname@example.org