Data Engineer at AVAST Software
Prague, CZ

AVAST Software (www.avast.com), maker of the world's most popular antivirus, protects over 400 million computers and mobile devices with our security applications. In business for 30 years, AVAST is one of the oldest companies in the computer security business, with a portfolio covering everything from free antivirus for PC, Mac, and Android, to premium suites and services for business – offered in about 40 languages.

Headquartered in Prague, Czech Republic, with offices in the U.S., Europe, and Asia, AVAST Software employs some of the brightest new talent in the IT industry, from around 30 different nations.

We are a small team of engineers developing and operating a Big Data analytics platform based on Hadoop technology, and we are now looking for a Data Engineer to join our team in Prague.

The platform is used by about a hundred people across the company; we support them by pioneering new tools, tuning performance, and ensuring stability.

We also do data science projects on our own, sifting through data no one ever dared to sift through before.

A typical task might be cooperating with data scientists to develop and deploy a cloud solution to a business problem, building a proof of concept with a new open-source Big Data technology, or finding out why our use of an existing technology is underperforming.

 

We are looking for someone who:

  • Wants to try out cutting-edge big data technologies that just didn't exist a few years ago
  • Thinks that a few petabytes of data and thousands of processors are cool to play with
  • Likes to sleep soundly because the code is well tested and monitored, and deployment automated
  • Is curious about how Machine Learning can be used in business
  • Doesn't think she or he knows everything already, but is able and willing to learn virtually anything

We don't require deep expertise in any of these areas, but if you patch Hadoop for breakfast, that's certainly a big plus.

Some example opportunities for professional growth:

  • Learn about Distributed System programming from pioneers of Functional Programming in Scala.
  • Learn about Data Science from expert Machine Learning scientists, both academically and practically.
  • Learn about maintaining and monitoring high-throughput high-performance solutions from the Operations team.
     

Requirements:

  • Some experience with Java/Scala server-side programming -- you don't need to have written dozens of backends, but you should be able to read and modify the code
  • General Computer Science knowledge -- algorithms & data structures, discrete mathematics, computer architecture, networks, etc.
  • A notion of what Big Data algorithms such as MapReduce are -- if your reaction is "Why don't you just load it into a database, normalize the schema, and do joins?", you should probably stop reading
  • A responsible approach to development -- JMX metrics and alerts, integration tests, containers
  • A notion of what Data Science / Machine Learning is -- you will cooperate closely with researchers
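To give a flavor of the MapReduce notion mentioned above, here is a toy single-machine sketch in Java (the `WordCount` class and `count` method are hypothetical names for illustration only -- this is the pattern, not how Hadoop itself is invoked):

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCount {
    // Single-machine sketch of the MapReduce pattern:
    //   map     -> split the text into individual word records
    //   shuffle -> group records by key (groupingBy)
    //   reduce  -> aggregate each group into a count (counting)
    static Map<String, Long> count(String text) {
        return Arrays.stream(text.trim().split("\\s+"))
                     .filter(w -> !w.isEmpty())
                     .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        // counts: to=2, be=2, or=1, not=1
        System.out.println(count("to be or not to be"));
    }
}
```

On a cluster, the same three phases run distributed: mappers emit key/value records in parallel, the framework shuffles them by key across machines, and reducers aggregate each key's group -- which is why a naive "load it into a database and join" approach stops scaling at petabyte sizes.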

What we use (experience with any of it is a plus):

  • Java/Scala
  • Kafka, Hadoop, HDFS, Kudu, Elasticsearch
  • Linux & Bash
  • Spark (both Python & Scala), Hive/Impala, Jupyter, scikit-learn, Spark ML, TensorFlow, Keras


We offer:

 

  • An interesting job in an international team of a growing and very successful company 
  • An exciting product portfolio
  • Opportunities for professional growth
  • A dynamic international work environment
  • A fair, performance-based salary
  • Great benefits: food and drink provided all day by the company, game room, music studio, fitness center, golf simulator, library, language lessons and much more
  • Flexible working hours, home office
  • Cafeteria benefit system, multisport card
  • 25 days of holiday, 5 sick days
  • The chance to join a major global tech company