Vamshi Krishna Kyatham

I am a seasoned Full Stack Engineer specializing in scalable, efficient, and intelligent software applications. With a strong foundation in full-stack software engineering and competitive programming, I architect high-performance systems that drive business impact, and I use agile, iterative processes to collaborate effectively and deliver strategic, data-driven solutions that optimize for efficiency and scalability.


Experience

Software Engineer

Finta

Technical Skills: JavaScript, Node.js, Express.js, React.js, Redux, Python, Flask, Django, Microservices, Machine Learning, PyTorch, Large Language Models (LLMs), Docker, Google Cloud, Firebase, Vector Databases (Pinecone), LangChain, AI.
  • Architected Aurora, a Fundraising Copilot leveraging an advanced RAG pipeline with LangGraph, implementing multi-stage retrieval optimization with hybrid search techniques and custom-trained language models to improve response quality and surface highly relevant investor recommendations.
  • Designed and implemented large-scale ETL pipelines for Aurora, leveraging Apache Flink, Spark, and LangGraph to optimize multi-stage retrieval with hybrid search techniques and LLM-powered data processing.
  • Engineered an AI-powered Investor Network Recommendation Engine that analyzes email interactions and web-scraped investor data to calculate relationship strength and investment probability. Implemented intelligent matching algorithms, reducing investor search time by 65%.
  • Leveraged LLM-powered tools like APIChain, RequestsTool, and PythonREPLTool for dynamic web scraping, structured data extraction, and real-time information retrieval to refine investor intelligence.
  • Developed the front-end of Aurora using React and Next.js. Architected and deployed scalable backend services with Flask and Node.js. Integrated Elasticsearch, enhancing data retrieval speed by 40%.
  • Built high-performance data pipelines using Apache Kafka, Spark Structured Streaming, and Flink for real-time data ingestion, transformation, and AI-driven analytics.
  • Implemented security protocols, including OAuth 2.0 authentication, and role-based access control.
  • Managed full SDLC phases, from requirement analysis and design to deployment on Google Cloud Run.
May 2024 - Feb 2025
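As an illustration of the hybrid search step above: reciprocal rank fusion (RRF) is one common way to merge a keyword ranking with a vector ranking. This is a minimal sketch with invented document IDs, not Finta's actual pipeline or fusion method:

```python
# Merge a keyword ranking and a vector ranking with reciprocal rank fusion.
# Document IDs and rankings below are illustrative only.

def reciprocal_rank_fusion(rankings, k=60):
    """Combine several ranked lists of doc IDs into one fused ranking."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["investor_42", "investor_7", "investor_99"]   # e.g. from BM25
vector_hits = ["investor_7", "investor_42", "investor_13"]    # e.g. from embeddings
fused = reciprocal_rank_fusion([keyword_hits, vector_hits])
# docs ranked highly by both sources rise to the top of the fused list
```

Documents that appear near the top of both lists accumulate the largest fused scores, which is why RRF is a robust default when keyword and semantic relevance disagree.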

Research & Teaching Assistant

University at Buffalo, The State University of New York

Technical Skills: Java, Spring Framework, Apache Kafka, Spring Batch, Spring Cloud, Spring Security, Python, PyTorch, Flask, Machine Learning, Data Analysis, AI, MongoDB, Hadoop, Spark, Spark Streaming, AWS, Distributed Computing, Reinforcement Learning, React.js, Redux.
  • Devised core services for OneDataShare, including a Single Sign-On feature for social and organizational logins using OAuth 2.0 and OIDC authentication. Optimized cross-platform data transfer by 90% through Bayesian Optimization, DDPG, and PPO, which predict optimized transfer parameters.
  • Teaching and Grading Assistant for CSE 4/587 – Data Intensive Computing and CSE 4/560 – Data Models and Query Language at UB, supporting 250+ students. Focused on advanced database concepts, Hadoop, MapReduce, Apache Spark/Spark Streaming, and machine learning fundamentals to bridge theory with scalable real-world data solutions.
January 2024 - May 2025

Software Development Engineer

Phenom

Technical Skills: Java, Spring Framework, Microservices, Apache Kafka, Flink, NiFi, Redis, Docker, AWS, Kubernetes, CI/CD, Jenkins, ArgoCD, Tekton, MongoDB, React.js, Redux, HTML, CSS.
  • Architected and engineered Flow Designer and Connector Builder, an alternative to Flow Studio at Phenom: a low/no-code platform with drag-and-drop connectors for building high-level flows and sub-flows. The tool automates the ETL phases (Fetching, Splitting, Extracting, and Evaluating from an ATS) by autogenerating Flink code, manifest files, and Dockerfile builds using Apache Velocity, Apache Ant, and ANTLR.
  • Designed and built a JAR server that automates JAR creation for the autogenerated code, which is then used to build a Docker image pushed to AWS S3. Automated deployment of the generated application with Terraform and Tekton, orchestrating CI/CD pipelines that deploy to Kubernetes (K8s).
  • Embedded Flink test cases and comprehensive integration tests in the CI/CD pipeline to ensure the reliability and performance of the autogenerated Flink Spring Boot application.
  • Developed an on-demand data migration system using Apache Flink and Apache NiFi for extracting candidates from open jobs and user-specified jobs in Workday ATS, achieving an 88% optimization in the extraction phase. Leveraged Apache NiFi for transformation and load phases, integrating with the Flink backend via Kafka events to handle real-time data streaming. Utilized big data techniques to optimize performance and scalability. Managed both backend development and deployment operations.
  • Utilized PyTorch BigGraph to architect and train a graph embedding model that predicts crucial missing features for candidate profiles.
  • Designed fault-tolerant and scalable streaming workflows to handle large volumes of recruitment data while maintaining data consistency and low-latency processing.
  • Fetched delta candidates from the SuccessFactors ATS using an Apache Flink Spring Boot application and Apache NiFi.
  • Integrated video assessment workflows into the recruitment pipeline, enabling automated candidate evaluations through self-video assessments.
April 2022 - August 2023

Software Engineer

Brane Enterprises Private Limited

Technical Skills: Java, Spring Framework, React.js, HTML, CSS, MySQL, MongoDB.
  • Played a crucial role in engineering scalable Spring Boot backend services for survey collection and seamless payment integration, ensuring high availability, security, and optimized performance.
  • Built real-time Grafana dashboards to monitor system metrics and performance, improving decision-making efficiency and insight.
  • Enhanced operational efficiency by implementing multi-threading and caching solutions using Redis.
  • Pioneered development of libraries and frameworks that integrate Spring Boot with big data technologies, streamlining the creation of data-pipeline applications.
  • Enhanced data streaming with an incremental DBSCAN clustering algorithm that updates clusters dynamically instead of recomputing them from scratch, reducing time complexity by 50%.
  • Designed and deployed Machine Learning models to analyze market trends, predict customer behavior, and optimize B2B pricing strategies, leveraging data-driven insights to maximize revenue potential. Built real-time analytics dashboards to monitor pricing performance, improving decision-making efficiency.
August 2021 - April 2022

Education

University at Buffalo, The State University of New York

Master of Science (Honors Degree)
Computer Science and Engineering - Research Track

GPA: 4.00

Osmania University

Bachelor of Engineering
Computer Engineering

GPA: 3.82


Skills

Technical Skills, Programming Languages, Frameworks & Tools:
    • Flask, Django, Kubernetes, Kafka, Redis, Hadoop, Spark, Flink, RabbitMQ, AI, LLM, PyTorch, TensorFlow
Workflow
  • Project Planning & Requirements Gathering
  • Design & Prototyping
  • Agile Development
  • Testing & Quality Assurance

Projects

Healthcare App - gRPC Based

Engineered a healthcare application leveraging gRPC to orchestrate dedicated microservices for patients, doctors, and appointments. This solution streamlines real-time appointment scheduling, enabling patients to easily view available slots and book meetings with doctors in a fast and efficient manner.


Key Innovations & Optimizations:
  • gRPC-Based Communication:

    Utilized gRPC for high-performance, low-latency inter-service communication. This robust framework ensures seamless data exchange between the patient, doctor, and appointment microservices, resulting in swift and reliable service interactions.

  • Modular Microservices Architecture:

    Designed a modular architecture with separate microservices for managing patients, doctors, and appointments. This clear separation of concerns enhances scalability, simplifies maintenance, and improves overall system resilience by allowing independent updates and scaling of each service.

  • Real-Time Appointment Scheduling:

    Developed a dynamic scheduling system that displays available slots to patients in real time. The appointment service handles slot management and booking, ensuring that the scheduling information is always accurate and up-to-date, thereby improving user experience and operational efficiency.

  • Scalable & Resilient Deployment:

    Leveraged containerization with Docker and orchestration via Kubernetes to deploy the system in a cloud environment. This approach supports horizontal scaling, fault tolerance, and rapid recovery, ensuring the application can efficiently handle high volumes of traffic and data.
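The slot-management logic at the heart of the appointment service can be sketched in a few lines. This is an illustrative, in-memory stand-in (the class and method names are invented); the real system exposes this behavior through dedicated gRPC microservices:

```python
import threading

# Toy sketch of race-free appointment booking: a lock makes the
# check-and-remove atomic, so two patients cannot book the same slot.

class Scheduler:
    def __init__(self, slots):
        self._free = set(slots)        # available (doctor_id, time) slots
        self._lock = threading.Lock()  # one booking wins per slot

    def available(self, doctor_id):
        return sorted(t for d, t in self._free if d == doctor_id)

    def book(self, doctor_id, time):
        with self._lock:               # atomic check-and-remove
            if (doctor_id, time) not in self._free:
                return False           # slot already taken
            self._free.remove((doctor_id, time))
            return True

s = Scheduler([("dr_rao", "09:00"), ("dr_rao", "09:30")])
first = s.book("dr_rao", "09:00")      # succeeds
second = s.book("dr_rao", "09:00")     # fails: slot already booked
```

In the real deployment this atomicity lives behind the appointment microservice's gRPC API, so all clients see a consistent view of the slots.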

StreamFlow

Engineered StreamFlow, a real-time data streaming pipeline designed for high-throughput event processing and scalable distributed storage. This cutting-edge system leverages industry-leading technologies to ensure efficient, reliable, and low-latency data handling at scale.


Key Innovations & Optimizations:
  • High-Throughput Data Ingestion:

    StreamFlow utilizes Apache Kafka to ingest vast streams of events with ease. Kafka’s robust partitioning and replication mechanisms guarantee high availability and fault tolerance, ensuring that data flows seamlessly even during peak loads.

  • Real-Time Stream Processing:

    At the core of StreamFlow is Apache Flink, which delivers real-time processing capabilities with exactly-once semantics. Flink efficiently performs data transformations, aggregations, and anomaly detection across 100K+ movie records, ensuring that insights are generated promptly and accurately.

  • Scalable Distributed Storage:

    Integrating HDFS on Hadoop, StreamFlow offers fault-tolerant, distributed storage. This allows the system to manage large volumes of data reliably while ensuring that storage performance scales in line with processing demands.

  • Containerized Deployment & Microservices Architecture:

    Deployed using Docker and Kubernetes, StreamFlow embraces a microservices architecture that ensures seamless horizontal scaling and optimal resource utilization. This approach not only enhances resilience and scalability in cloud environments but also simplifies maintenance and rapid deployment.

StreamFlow stands out as a robust solution for organizations needing to process and analyze streaming data in real time. Whether it's for dynamic data transformations or real-time anomaly detection, this system is engineered to deliver both speed and reliability, paving the way for informed, data-driven decisions in fast-paced environments.
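The windowed aggregation Flink performs in StreamFlow can be illustrated with a tumbling-window count in plain Python. The event data and window size below are made up; this shows the semantics only, not Flink's runtime:

```python
from collections import defaultdict

# Count events per key in 10-second tumbling windows, the kind of
# aggregation a Flink job runs continuously over a Kafka stream.

def tumbling_counts(events, window_s=10):
    """events: iterable of (epoch_seconds, key); returns {(window_start, key): count}."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_s) * window_s  # bucket the timestamp
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "view"), (3, "view"), (9, "rate"), (12, "view")]
agg = tumbling_counts(events)
# window [0, 10): view x2, rate x1; window [10, 20): view x1
```

Flink adds what this sketch omits: event-time watermarks, state checkpointing for exactly-once results, and distributed execution.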

Reinforcement Learning with Human Feedback (RLHF) Using PPO for Sentiment Analysis

Engineered RLHF-PPO for Sentiment Analysis, a reinforcement learning framework built from scratch that leverages Proximal Policy Optimization (PPO) with human feedback to enhance sentiment classification on the IMDB dataset, following the Hugging Face methodology. This project integrates off-policy learning, adaptive reward modeling, and robust policy gradient techniques to improve AI alignment for sentiment analysis.


Key Innovations & Optimizations:
  • RLHF for Sentiment Understanding:

    Developed a reward model trained on human-labeled sentiment preferences, enabling the policy to align better with nuanced sentiment patterns. This approach refines traditional classification by integrating reinforcement learning for contextual understanding.

  • Custom PPO Implementation for Stability & Performance:

    Implemented PPO from scratch, optimizing the clipped objective function to stabilize policy updates. The actor-critic framework balances exploration and exploitation, ensuring robust sentiment prediction.

  • Off-Policy Learning for Data Efficiency:

    Designed an off-policy training mechanism to learn from past experiences, improving sample efficiency and reducing redundant computations. This enables scalable and cost-effective training.

  • Reward Hacking Prevention & Adaptive Reward Shaping:

    Addressed reward hacking by designing structured reward functions that discourage unintended policy behaviors. Regularization and penalty-based adjustments ensure rewards align with human preferences.

  • Optimized NLP Pipeline for Large-Scale Sentiment Analysis:

    Built an end-to-end pipeline with efficient tokenization, batch processing, and reward-based policy updates, optimized for sentiment variations while maintaining computational efficiency.

  • Fine-Tuned Generalized Advantage Estimation (GAE):

    Implemented GAE to improve advantage function estimation, reducing variance while balancing bias, ensuring training stability, and faster convergence.
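The two core pieces above, the PPO clipped surrogate objective and Generalized Advantage Estimation, can be written out in scalar form. A minimal sketch with toy numbers; the actual implementation batches these over token sequences in PyTorch:

```python
import math

# PPO clipped surrogate: penalize policy updates whose probability ratio
# moves outside [1 - eps, 1 + eps], stabilizing training.
def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    ratio = math.exp(logp_new - logp_old)
    clipped = max(min(ratio, 1 + eps), 1 - eps)
    # maximize the pessimistic (min) surrogate -> minimize its negation
    return -min(ratio * advantage, clipped * advantage)

# GAE: exponentially weighted sum of TD errors, trading bias for variance.
def gae(rewards, values, gamma=0.99, lam=0.95):
    """values has len(rewards) + 1 entries (bootstrap value appended)."""
    advantages, running = [], 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        advantages.append(running)
    return advantages[::-1]

adv = gae(rewards=[1.0, 0.0, 1.0], values=[0.5, 0.4, 0.3, 0.0])
loss = ppo_clip_loss(logp_new=-1.0, logp_old=-1.2, advantage=adv[0])
```

With lam near 1 GAE approaches the low-bias, high-variance Monte Carlo advantage; with lam near 0 it collapses to the one-step TD error, which is the bias/variance knob mentioned above.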

PaliGemma Multimodal Vision-Language Transformer

Engineered PaliGemma, a vision language model leveraging PyTorch, optimized for seamless visual and textual data processing. The model integrates KV caching and Grouped Query Attention (GQA) to enhance computational efficiency, enabling faster inference, reduced memory usage, and improved multimodal understanding.


Key Features & Optimizations:
  • KV Caching for Faster Inference:

    Implemented an optimized Key-Value (KV) caching mechanism to store and reuse attention keys and values, reducing redundant computations and significantly accelerating inference speeds. Particularly beneficial for long-sequence text generation and image-captioning tasks, ensuring minimal latency while maintaining high accuracy.

  • Grouped Query Attention (GQA) for Memory Efficiency:

    Enhanced model efficiency using Grouped Query Attention, which reduces memory bandwidth requirements by sharing keys and values across multiple query heads. This optimization leads to faster token processing without compromising accuracy, making the model suitable for real-time vision-language applications.

  • Seamless Visual-Textual Fusion:

    Designed the multimodal encoder-decoder architecture to efficiently align and process both images and text, ensuring rich contextual understanding. Optimized feature extraction from image embeddings using Contrastive Vision Encoder and text encoder for coherent language generation.

  • Optimized Fine-Tuning:

    Fine-tuned on diverse multimodal datasets to enhance image-text alignment and contextual reasoning. Improved performance across tasks such as image captioning, visual question answering (VQA), text-conditioned image retrieval, and multimodal reasoning. Leveraged parameter-efficient fine-tuning techniques (LoRA, QLoRA) to reduce memory overhead while maintaining high accuracy.
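The KV-caching idea above can be shown with toy scalar keys and values: each decoding step appends its key/value once, and every later query attends over the cache instead of recomputing the whole prefix. A minimal sketch, not the PaliGemma implementation:

```python
import math

# Toy KV cache: keys/values computed once per generated token, reused
# by every subsequent attention step.
class KVCache:
    def __init__(self):
        self.keys, self.values = [], []

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)

def attend(query, cache):
    """Softmax attention of one scalar query over all cached keys/values."""
    scores = [query * k for k in cache.keys]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    return sum(w * v for w, v in zip(weights, cache.values)) / z

cache, outputs = KVCache(), []
for k, v in [(1.0, 2.0), (0.5, 4.0)]:
    cache.append(k, v)                       # compute K,V once per new token
    outputs.append(attend(1.0, cache))       # reuse all earlier K,V
```

Grouped Query Attention composes with this naturally: by sharing one cached key/value set across several query heads, the cache itself shrinks, which is where the memory-bandwidth savings come from.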

OneDataShare

Actively developing core services, network optimizers, and transfer services for OneDataShare, a platform that optimizes cross-platform data transfers by improving throughput, efficiency, and sustainability. This is achieved through a combination of batch multi-threading, Bayesian Optimization, and Reinforcement Learning (RL) techniques like Deep Deterministic Policy Gradient (DDPG) and Proximal Policy Optimization (PPO).


Key Features & Optimizations:
  • Batch Multi-Threading:

    Implements an adaptive multi-threading model to maximize parallelism and network resource utilization, accelerating transfers across diverse endpoints.

  • Reinforcement Learning for Transfer Optimization:
    • Deep Deterministic Policy Gradient (DDPG):

      Used to dynamically adjust network parameters, such as bandwidth allocation and concurrency levels, to optimize throughput and minimize transfer latency.

    • Proximal Policy Optimization (PPO):

      Helps in real-time policy refinement, ensuring stable and adaptive decision-making for efficient data transfer in dynamic network conditions.

  • Bayesian Optimization for Parameter Tuning:

    Fine-tunes key transfer parameters like batch size, concurrency, and chunk distribution, ensuring optimal performance under varying network conditions.

  • Carbon Emission Reduction:

    RL-based models dynamically adjust energy-intensive operations, optimizing server utilization and workload distribution to minimize energy consumption. Green-Aware Scheduling ensures that data transfers leverage low-carbon intensity time slots, reducing server carbon footprint while maintaining high efficiency.

  • Cross-Platform Integration & Scalability:

    Supports heterogeneous data transfer endpoints, optimizing performance across cloud services, HPC clusters, and distributed storage systems. Implements fault-tolerant mechanisms to ensure high reliability and robustness in large-scale data transfers.

AI-Driven Squat Analysis System

Engineered an AI-driven squat analysis system combining computer vision and NLP to provide real-time, personalized feedback for form correction and injury prevention.


Key Features & Optimizations:
  • State-Based Squat Detection:

    Tracks squat phases with a transition model, ensuring accurate rep counting and form validation.

  • AI-Powered Feedback:

    Fine-tuned TinyLlama with LoRA to deliver context-aware, natural language coaching tailored to the user’s performance.

  • Pose Estimation & Biomechanics:

    MediaPipe Pose extracts joint angles to assess squat depth, knee alignment, and back posture with high precision.

  • Adaptive User Profiling:

    Dynamic thresholds for Beginner vs. Advanced users, ensuring personalized guidance.

  • Optimized Real-Time Processing:

    Frame stabilization, brightness correction, and symmetry detection enhance accuracy across environments.
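The biomechanics check above reduces to computing the angle between two landmark vectors, e.g. hip-knee and ankle-knee for squat depth. A minimal sketch with made-up coordinates and an assumed depth threshold:

```python
import math

# Angle at the middle point b (in degrees) formed by points a-b-c,
# e.g. hip-knee-ankle from pose landmarks.
def joint_angle(a, b, c):
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

hip, knee, ankle = (0.0, 1.0), (0.0, 0.0), (1.0, 0.0)
angle = joint_angle(hip, knee, ankle)   # right angle for this toy pose
deep_enough = angle <= 100              # illustrative squat-depth threshold
```

In the real system MediaPipe Pose supplies the landmark coordinates, and the thresholds differ per user profile (Beginner vs. Advanced) as described above.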

High-Quality Image Generation Using GANs and Diffusion Models

Engineered a cutting-edge image generation pipeline that integrates a Generative Adversarial Network (GAN) followed by Denoising Diffusion Probabilistic Models (DDPM) and Stable Diffusion models, all implemented in PyTorch. This approach aims to generate high-quality, realistic images by iteratively refining noise inputs using a combination of adversarial training, diffusion-based denoising, and stable latent space conditioning.


Key Features & Optimizations:
  • GAN (Generative Adversarial Network):

    Initially, a GAN architecture is employed where the Generator creates images from random noise and the Discriminator evaluates their realism. Focused on minimizing mode collapse and improving image diversity by incorporating progressive training and advanced loss functions such as Wasserstein loss.

  • Denoising Diffusion Probabilistic Model (DDPM):

    Transitioned to a DDPM implemented from scratch, refining the generated images by iteratively denoising Gaussian noise over several steps, enabling the model to learn complex distributions in the data. Used a stochastic sampling process for generating realistic images with greater details and consistency compared to standard GANs.

  • Stable Diffusion Model:

    Further refined image quality using a Stable Diffusion model based on U-Net architecture, improving the stability and resolution of generated images. Implemented latent space diffusion for faster and more efficient image generation, reducing computational cost while improving quality in high-resolution outputs.

  • Training Optimization & Performance Enhancements:

    Utilized PyTorch Lightning for distributed training, mixed-precision computation, and memory-efficient training. Integrated adaptive learning rates, EMA updates, and perceptual loss functions like LPIPS and SSIM to enhance the generated images' realism and structural coherence. Applied data augmentation and progressive image scaling during training to help the model generalize better.

Language Model (Optimized LLaMA 2) supporting Q&A for C++-related questions

A highly optimized LLaMA 2 model fine-tuned to efficiently handle C++ related queries, leveraging Grouped Multi-Query Attention (GMQA), an optimized KV Cache implementation, and LoRA fine-tuning. The model delivers fast, accurate responses using open-source data while maintaining a lightweight inference footprint.


Key Features & Optimizations:
  • Grouped Multi-Query Attention (GMQA):

    Enhances efficiency by sharing keys and values across multiple query heads, reducing memory bandwidth requirements and improving inference speed.

  • KV Cache Optimization:

    Implements a custom KV Cache to minimize redundant computations, significantly reducing latency in streaming inference scenarios.

  • LoRA Fine-Tuning:

    Utilizes Low-Rank Adaptation (LoRA) to inject task-specific knowledge into select layers, reducing compute and memory overhead while maintaining high accuracy on C++ queries on top of Meta-trained weights.

  • Data Curation & Benchmarking:

    Fine-tuned on open-source C++ documentation, Stack Overflow discussions, and GitHub repositories, achieving a ~40% latency reduction compared to baseline implementations.
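The LoRA mechanism above amounts to merging a frozen weight W with a scaled low-rank product: W' = W + (alpha / r) * B @ A, where only the small A and B matrices are trained. A toy-sized, pure-Python sketch (dimensions and values are illustrative):

```python
# Merge a LoRA update into a frozen base weight matrix.
# W is out x in, A is r x in, B is out x r; only A and B were trained.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def merge_lora(W, A, B, alpha=16, r=2):
    delta = matmul(B, A)                 # low-rank update, out x in
    scale = alpha / r                    # standard LoRA scaling factor
    return [[W[i][j] + scale * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]             # frozen base weight (toy 2x2)
A = [[0.1, 0.0], [0.0, 0.0]]             # trained, r x in
B = [[0.0, 0.0], [0.1, 0.0]]             # trained, out x r
W_merged = merge_lora(W, A, B)
```

Because rank r is far smaller than the weight dimensions in practice, the trainable parameter count (and optimizer memory) drops by orders of magnitude relative to full fine-tuning.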

Impact & Applications:
  • Developer Assistants:

    Enhances LLM-powered IDE plugins for real-time C++ debugging and code generation.

  • Educational Tools:

    Provides an interactive learning experience for C++ concepts.

  • Efficient LLM Inference:

    Reduces costs and memory footprint, enabling scalable AI deployments.

This project demonstrates the power of efficient model architectures and optimization techniques in making LLMs faster, lighter, and more practical for real-world applications.

Automated Car Driving using Reinforcement Learning

Supervised the training of multiple autonomous agents within a simulated software environment, focusing on optimizing navigation strategies for a car to travel from Point A to Point B with zero accidents. The project emphasized reinforcement learning and agent coordination to ensure safe, efficient, and seamless vehicle navigation in complex environments.


Key Techniques & Innovations:
  • Autonomous Agent Supervision:

    Guided the training process of multiple agents, ensuring they learned optimal navigation strategies through trial-and-error in a simulated environment. Provided feedback and adjustments to reinforce safe driving behaviors, promoting the learning of collision avoidance, path planning, and traffic awareness.

  • Zero Accident Guarantee:

    Focused on ensuring a smooth, accident-free journey for the vehicle, training the agents to respond to various challenges such as obstacles, traffic signals, and roadblocks without causing accidents. Implemented real-time monitoring to track and prevent collisions, ensuring the agents’ actions adhered to safe driving principles.

  • Reinforcement Learning for Navigation:

    Utilized reinforcement learning algorithms to train the agents with reward-based systems, where rewards were given for safe distance maintenance, smooth turns, and efficient pathfinding. Integrated state-of-the-art algorithms such as Deep Q-Learning (DQN) and Proximal Policy Optimization (PPO) to enhance decision-making and performance.

  • Evaluation & Performance Metrics:

    Evaluated agent performance using key metrics like travel time, collision frequency, and path efficiency, fine-tuning the model based on performance analysis. Ensured that agents could maintain a high success rate for completing the journey with zero accidents in varying test conditions.

Comprehensive Text Chat Application

Developed a comprehensive text chat application with both client and server components, utilizing Socket Programming in both C++ and Java. The project includes a TCP-based chat server supporting multiple clients and a UDP-based peer-to-peer communication model for efficient direct messaging. Additionally, engineered an SMTP-based email sending application from scratch, enabling seamless email communication.


Key Features & Innovations:
  • TCP-Based Chat Server and Client (C++ & Java):

    Developed a chat server that handles multiple concurrent client connections over TCP sockets, built using both C++ and Java for cross-platform compatibility. Clients can send and receive messages, with the server managing message delivery and broadcasting in real-time.

  • UDP-Based Peer-to-Peer Communication (C++ & Java):

    Engineered a UDP-based peer-to-peer communication system for direct client-to-client messaging, ensuring low latency and high efficiency. Implemented dynamic IP discovery and NAT traversal to facilitate communication across different network setups. Developed this model in both C++ and Java, ensuring interoperability between clients running on different platforms and environments.

  • SMTP-Based Email Sending Application (C++ & Java):

    Built an SMTP client that facilitates email sending through specified mail servers, implemented in both C++ and Java. Integrated user authentication, email formatting, and secure transmission supporting text and HTML emails. Included features like attachment handling and SMTP error management to ensure reliable email delivery.

  • Socket Programming in C++ & Java:

    Leveraged C++ and Java socket programming techniques to manage TCP/UDP communication, handling client-server concurrency through multithreading. Implemented low-level network protocols, ensuring robust data transmission and efficient handling of client requests and server responses in both languages.
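A minimal Python rendition of the TCP request/response loop the project implements in C++ and Java (single client, echo only; the real server's multi-client broadcasting and concurrency handling are omitted):

```python
import socket
import threading

# Tiny TCP server: accept one connection, echo one message back.
def serve_once(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)            # receive one message
        conn.sendall(b"echo: " + data)    # reply to the sender

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))             # OS picks a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Client side: connect, send, read the echoed reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)
server.close()
```

The C++ and Java versions follow the same accept/recv/send lifecycle; the chat server extends it by keeping a list of connected clients and rebroadcasting each received message to all of them.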

Image Classification using Deep Learning

Designed and implemented a PyTorch-based Neural Network for image classification, integrating key features from both VGG and ResNet, resulting in a 128-layer deep model. This hybrid architecture leverages VGG’s hierarchical feature extraction and ResNet’s skip connections, ensuring high accuracy, stability, and efficient training for complex image classification tasks.


Key Techniques & Innovations:
  • Hybrid VGG-ResNet Architecture:

    Combined VGG’s deep feature extraction capabilities with ResNet’s residual connections to mitigate vanishing gradient issues and improve training efficiency. Developed a 128-layer network optimized for deep feature learning, balancing computational efficiency with high representational power. Implemented batch normalization, dropout, and swish activation functions to enhance training stability and generalization.

  • Training and Optimization Strategies:

    Applied learning rate annealing, mixed precision training, and advanced augmentation techniques to improve model robustness. Tuned hyperparameters using Bayesian Optimization, achieving optimal weight initialization, activation scaling, and adaptive learning rates.

  • Performance Enhancements & Generalization:

    Achieved high accuracy across diverse datasets, improving cross-domain generalization through fine-tuning and adaptive regularization. Employed gradient checkpointing and dynamic computation graphs to optimize memory usage during training, enabling deeper network deployment. Evaluated performance on benchmark datasets, demonstrating superior classification accuracy compared to traditional single-architecture models.

Online Grocery Ordering Application

Architected a full-stack Online Grocery Ordering Application using Java Spring Boot for the backend and Thymeleaf for the front end, enabling seamless browsing, ordering, and checkout of grocery items. The application ensures efficient inventory management, secure transactions, and a user-friendly interface, enhancing the online shopping experience.


Key Features & Technologies:
  • Backend – Java Spring Boot:

    Built a RESTful API using Spring Boot, handling user authentication, product management, and order processing. Implemented Spring Data JPA for seamless database interactions, ensuring efficient CRUD operations. Designed a scalable microservices architecture, allowing modular expansion for future enhancements. Integrated Spring Security for user authentication and role-based access control (RBAC) to protect sensitive operations.

  • Frontend – Thymeleaf & Bootstrap:

    Developed a responsive UI using Thymeleaf, ensuring dynamic data rendering directly from the backend. Designed an intuitive shopping experience, allowing users to browse products, add items to the cart, and complete secure checkouts. Used Bootstrap & CSS for a clean, user-friendly design, ensuring mobile responsiveness.

  • Core Functionalities:

    User Authentication & Role Management: Implemented Spring Security with JWT, allowing secure login and role-based access.
    Product Catalog & Search: Users can browse, filter, and search for grocery items in real-time.
    Shopping Cart & Order Management: Supports cart persistence, order tracking, and secure checkout with integrated payment processing.
    Admin Dashboard: Enables inventory management, sales tracking, and order fulfillment, ensuring seamless backend operations.

Small Business Network Design using secure web servers

Designed and implemented an Enterprise-Managed Global Wide Area Network (WAN) using Cisco Packet Tracer, leveraging TCP/IP protocols to establish secure and efficient communication between globally distributed branches. The network was optimized for reliability, scalability, and high-performance data transfer, ensuring seamless operations across various locations.


Key Technologies & Networking Protocols:
  • Network Design & Implementation:

    Developed a scalable WAN architecture in Cisco Packet Tracer, simulating real-world enterprise network infrastructure. Configured routers, switches, and endpoints to establish efficient branch-to-branch communication. Implemented TCP/IP stack for error detection, congestion control, and reliable data delivery.

  • Dynamic IP Allocation & Name Resolution:

    Configured Dynamic Host Configuration Protocol (DHCP) to automate IP address allocation, simplifying network administration. Deployed Domain Name System (DNS) for domain-to-IP resolution, ensuring seamless network accessibility and efficient request handling.

  • Reliable Data Transmission Mechanisms:

    Integrated Go-Back-N (GBN) and Selective Repeat (SR) protocols for error recovery and packet retransmission, enhancing network reliability. Ensured low-latency, efficient communication by optimizing packet loss handling and retransmission strategies.

  • Advanced Routing Protocols for Optimized Path Selection:

    Configured Routing Information Protocol (RIP) for distance-vector routing, enabling automated path selection. Deployed Enhanced Interior Gateway Routing Protocol (EIGRP) for faster convergence, adaptive routing, and optimized bandwidth utilization.

  • Security & Performance Enhancements:

    Integrated access control lists (ACLs) and firewall configurations to restrict unauthorized access and protect sensitive data. Implemented encryption and authentication mechanisms, ensuring secure, encrypted communication across branches.
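The Go-Back-N retransmission rule mentioned above can be simulated in a few lines. This is a scripted illustration of the protocol logic only (the channel model and names are invented), not the Packet Tracer configuration:

```python
# Go-Back-N: the sender keeps a window of unacked frames; when a frame is
# lost, everything from the oldest unacked frame onward is retransmitted.

def go_back_n(frames, window=3, drop=frozenset()):
    """Return the sequence of frame sends (including retransmissions)
    needed to deliver all frames when the receiver drops each index
    in `drop` exactly once."""
    sends, base, dropped = [], 0, set(drop)
    while base < len(frames):
        # send the whole window starting at the oldest unacked frame
        for i in range(base, min(base + window, len(frames))):
            sends.append(frames[i])
        # receiver accepts in order until the first (still pending) drop
        i = base
        while i < min(base + window, len(frames)):
            if i in dropped:
                dropped.discard(i)   # next retransmission will succeed
                break
            i += 1
        base = i                     # cumulative ACK covers frames < i
    return sends

sends = go_back_n(["f0", "f1", "f2", "f3"], window=2, drop={1})
```

Selective Repeat improves on this by resending only the lost frame rather than the whole window, which is why the project evaluates both strategies for packet-loss handling.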

Li-Fi Based Data Transmission System

Revolutionizing wireless communication, this project implements a Li-Fi (Light Fidelity) based data transmission system, utilizing light signals instead of traditional radio waves (Wi-Fi) for ultra-fast and secure communication. Developed using Arduino C++ for hardware control and a Kotlin-based mobile application, the system efficiently converts and transmits text messages via light signals, demonstrating the potential of next-generation wireless communication.


Key Technologies & Implementation:
  • Li-Fi Communication Mechanism:

    Utilized LEDs as transmitters and photodiodes as receivers to establish a high-speed, interference-free data transmission channel. Implemented modulation techniques to encode digital data into light pulses, ensuring accurate and reliable transmission. Achieved low-latency, secure communication, reducing susceptibility to electromagnetic interference and eavesdropping.

  • Hardware Integration with Arduino C++:

    Programmed an Arduino microcontroller to control light signal modulation and demodulation. Designed an efficient signal encoding/decoding algorithm to optimize data transfer rates and accuracy.

  • Mobile Application – Kotlin-Based UI & Text Processing:

    Developed a Kotlin-powered Android application to convert user-input text into light signals for transmission. Implemented real-time decoding algorithms, converting received light signals back into readable text. Designed a user-friendly UI, allowing seamless message encoding, transmission, and reception via Li-Fi.

  • Performance Enhancements & Future Scalability:

    Enhanced data transmission range and speed through optimized LED switching frequencies. Proposed future integration with IoT devices, expanding Li-Fi’s applications in smart homes, healthcare, and industrial automation. Focused on energy efficiency, reducing power consumption compared to traditional wireless communication systems.
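The bit-level encoding behind the Li-Fi link can be sketched independently of the hardware: text becomes a stream of on/off light pulses (on-off keying) and the receiver reverses the mapping. The pulse format below (8 pulses per byte, MSB first, no framing or error correction) is an assumption for illustration; the real system runs this logic on the Arduino and in the Kotlin app:

```python
# On-off keying sketch: 1 = LED on, 0 = LED off, 8 pulses per UTF-8 byte.

def encode(text):
    """UTF-8 text -> list of 0/1 pulses, MSB first."""
    return [(byte >> bit) & 1
            for byte in text.encode("utf-8")
            for bit in range(7, -1, -1)]

def decode(pulses):
    """0/1 pulses -> text (assumes a whole number of bytes)."""
    data = bytearray()
    for i in range(0, len(pulses), 8):
        byte = 0
        for bit in pulses[i:i + 8]:
            byte = (byte << 1) | bit     # rebuild the byte MSB first
        data.append(byte)
    return data.decode("utf-8")

pulses = encode("Hi")
roundtrip = decode(pulses)
```

On real hardware the hard part is not this mapping but clock recovery: the receiver must sample the photodiode at the transmitter's pulse rate, which is what the LED switching-frequency optimization above addresses.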


Interests

Apart from being a developer, I enjoy spending most of my time outdoors. I love playing soccer, and I am an avid skier and novice ice climber in the winter. During the warmer months here in Buffalo, I enjoy kayaking.

When forced indoors, I follow a number of Sci-Fi and Adventure genre movies and television shows, and I spend a large amount of my free time exploring the latest technology advancements happening in the world.



Awards & Certifications

  • Research Track (in the field of AI & Distributed Systems) & Honors Degree Student - University at Buffalo.
  • 5th Rank Holder (Gold Medalist) in Bachelor's Degree - Osmania University.
  • Meta Hackercup 2024, Round 2 Qualifier.
  • Google Code Jam 2022, Round 2 Qualifier.
  • Google Kickstart 2022, Round C Qualifier.
  • Meta Hackercup 2021, Round 2 Qualifier.
  • Expert on Codeforces.
  • Guardian and top 1% on LeetCode.
  • 4-star rated coder on CodeChef.
  • Virtusa NeuralHack 2021 Finalist.
  • TechGig Code Gladiators 2021 Finalist.
  • JusPay Hackathon 2020 Finalist.
  • Silver Medal Holder in C, C++ Programming - NPTEL.
  • All India Rank - 1872, GATE EC 2022.