Responsibilities include capacity planning, performance improvement, and automation and tools development.
The ideal candidate is expected to meet tight project deadlines, excel under pressure, work well with others, be self-motivated, and be able to manage short- and long-term projects.
You will also be expected to help optimize and fine-tune the environment and systems for maximum uptime and performance.
You will actively interface with software developers, network engineers, and systems, storage, and database administrators on projects, and provide on-call support. You should be able to identify, troubleshoot, and resolve issues quickly and effectively, sometimes under pressure.
Excellent communication skills and teamwork are a must!
Must have:
4+ years of experience in a Systems Engineering / DevOps role
Solid experience with distributed systems and exposure to a breadth of Big Data technologies such as Hadoop MapReduce, YARN, Hive, HBase, and Redis
Proficiency with JVM internals as they relate to performance tuning and memory management
Experience creating software tools to automate production systems in one of the following languages: Python, Go, Ruby, or Java
Strong fundamentals in networking and related protocols
Proven experience troubleshooting problems and working with a team to resolve large-scale production issues
Good understanding of configuration management, monitoring, and systems tools (e.g., Puppet, Nagios, Cacti, Graphite, Logstash)
Good understanding of MySQL
Nice to have:
Experience in the ad-technology space
Experience working with cloud-based (AWS, Google Cloud) or hybrid environments (OpenStack, Docker)
Proficiency with source control, continuous integration, and testing methods (Git, Jenkins)
Familiarity with NoSQL technologies (e.g., MongoDB, Cassandra, ScyllaDB)