CAREERS

See all we have to offer you

We develop technology solutions

Our Vision is to be a leading software solutions company for a variety of industries. We know that our customers' growth is our growth, so we commit ourselves to helping them achieve their business goals. We want to be known as a reliable, innovative, and top-quality software service provider in the IT industry.

Our Mission is to enhance our customers' business growth through creative design and development, delivering market-defining, high-quality solutions that create value and a reliable competitive advantage for customers around the globe.

Data Engineer

We are looking for an experienced Data Engineer who can take a proactive role in a self-organized and driven team where the “Innovation by All” revolution is well underway.

If you are eager to work in an agile organization, where re-prioritization happens regularly, and you are willing to drive the agenda and continually find improvements both in code and in processes, HTEC is the right place for you.

Important note: All interviews are held online, in line with the current recommendations regarding the coronavirus situation.

Key Responsibilities:

  • Participate in the design and development of Big Data applications
  • Create ETL processes and frameworks for analytics, data management, and data warehousing
  • Implement large-scale near real-time streaming data processing pipelines
  • Scale and optimize data systems for batch and streaming operations
  • Implement and manage the data life cycle stages

Requirements:

  • Strong coding experience in Scala, Java, or Python
  • Strong knowledge of relational and non-relational databases
  • Experience with REST/SOAP APIs
  • Experience with the most common AWS services (EC2, S3, …)
  • In-depth knowledge of the Hadoop ecosystem (HDFS, Spark, Hive)
  • Knowledge of Unix-like operating systems (shell, ssh, grep, awk)
  • Experience with GitHub-based development processes and Scrum methodology
  • Fluent English is a must

An ideal candidate would also have:

  • Experience with streaming technologies (Kafka, Spark Streaming)
  • Experience with job scheduling tools (Jenkins, Apache Airflow)
  • Experience with building data pipelines on AWS, Azure, or Google Cloud
  • Experience with data modeling, data architecture, and data versioning
  • Experience with BI platforms and solutions (Tableau, Qlik, …)
  • Experience with machine learning (a huge plus)
