1. AWS Data Engineer
Experience Required: 3 to 5 Years
Location: Ahmedabad, India
Key Responsibilities:
Design, develop, and maintain scalable data pipelines using AWS services.
Leverage Python and PySpark for data processing and transformation.
Work with AWS Glue for ETL processes, ensuring efficient data movement and transformation.
Collaborate with cross-functional teams to understand data requirements and implement solutions.
Optimize and manage data storage solutions on AWS.
Monitor and troubleshoot data pipelines to ensure smooth operations.
Integrate with AWS SageMaker for machine learning models (good to have).
Key Skills:
Python programming for data manipulation and automation.
Experience with PySpark for big data processing.
Expertise in AWS Glue for ETL development.
Familiarity with other AWS services such as S3, Lambda, Athena, and Redshift.
Knowledge of SageMaker and basic understanding of machine learning concepts (good to have).
Strong problem-solving skills and ability to work independently in a hybrid environment.
Nice to Have:
Experience with Machine Learning (ML) and AI in data solutions.
Exposure to other AWS services used in data analytics.
Qualifications:
Bachelor’s degree in Computer Science, Information Technology, or a related field.
Proven experience in AWS ecosystem with a focus on data engineering tools and services.
2. AI/ML Engineer
Experience Required: 4 to 7 Years
Location: Ahmedabad, India
Key Responsibilities:
Designing, developing, testing, and deploying machine learning models for various applications
Collaborating with data scientists, software engineers, and product managers to develop data-driven features
Optimizing and improving the performance of existing machine learning models
Implementing and maintaining scalable machine learning pipelines and infrastructure
Analyzing and preprocessing large datasets to extract valuable insights and features
Staying updated with the latest developments in machine learning, deep learning, and related technologies
Conducting model training, validation, and performance evaluation to ensure models meet the required accuracy and reliability
Creating and maintaining documentation related to machine learning models, algorithms, and processes
Developing A/B testing frameworks and managing the deployment of models in production
Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics, or a related field
4+ years of hands-on experience in AI and machine learning
Proven experience as a Machine Learning Engineer, Data Scientist, or in a similar role
Strong programming skills in Python, R, or similar languages
Proficiency with machine learning libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn, and experience with NLP libraries (e.g., NLTK, spaCy, Hugging Face Transformers)
Experience with data preprocessing, data wrangling, and data visualization
Hands-on experience with SQL databases and API integration
Experience with text generation techniques, including language models like GPT
Hands-on experience with cloud platforms (AWS, GCP, or Azure) and deploying models in production
Solid understanding of machine learning algorithms, deep learning architectures, and statistical methods
Experience with version control systems (e.g., Git) and continuous integration/continuous deployment (CI/CD) pipelines
Ability to work in a collaborative environment and communicate effectively with cross-functional teams.
Nice to Have:
Knowledge of natural language processing (NLP) and its applications
Experience with MLOps tools and best practices for scalable model deployment and monitoring
Familiarity with data privacy and security regulations
Experience with real-time data processing and streaming technologies
Experience with reinforcement learning, generative models, or unsupervised learning techniques.