Academic Experts
Dr. Divya Kaushik
ASSISTANT PROFESSOR (SR GRADE)
divya.kaushik@mail.jitt.ac.in
Biography

Dr. Divya Kaushik is an Assistant Professor in the Department of Electronics and Communication Engineering at Jaypee Institute of Information Technology (JIIT), Noida, with over eleven years of combined teaching and research experience. She earned her Ph.D. in Electrical Engineering from the Indian Institute of Technology (IIT) Delhi, an M.Tech. in Embedded Systems from the National Institute of Technology (NIT) Kurukshetra, and a B.Tech. in Electronics and Communication Engineering from Kurukshetra University.

Dr. Kaushik’s scholarship sits at the intersection of device physics and machine intelligence: she focuses on energy-efficient neuromorphic hardware and algorithm-to-hardware co-design for next-generation intelligent systems. Her doctoral research, titled “On-chip Learning in a Spintronics-based Hardware Neural Network: A Device-Circuit-System Co-study,” examined on-chip learning mechanisms using spintronic synapses and proposed architectures that balance scalability, energy efficiency, and training feasibility. She has hands-on expertise in micromagnetic and circuit simulation tools (Mumax3, Cadence Virtuoso) and is fluent in MATLAB and Python for algorithm development and data analysis.

Her work has been recognized internationally: she has presented at venues such as the Magnetism and Magnetic Materials (MMM) conference and received the AIP Advances in Magnetism Award (Best Paper, 2020). Beyond research, she contributes to academic development through FDP coordination and active participation in professional committees and peer review. Her profile combines strong theoretical foundations with practical device- and system-level engineering, making her research directly relevant to low-power AI hardware and neuromorphic computing initiatives.

Research Highlights

Dr. Kaushik’s research program targets energy-efficient neuromorphic computing through a device-to-system co-design methodology. A central theme is on-chip learning implemented with emerging non-volatile devices, notably spintronic elements such as domain-wall synapses and skyrmionic devices, benchmarked against RRAM and PCM alternatives. Her work systematically evaluates synapse cell optimization, the hardware feasibility of backpropagation-style learning in analog crossbar arrays, and strategies to mitigate device non-idealities.

At IIT Delhi she led projects on spintronic non-spiking neural networks that demonstrated spin-orbit-torque-driven domain-wall synapses for low-energy training and inference, while also proposing MOSFET-based analog neural circuits as silicon-friendly alternatives. Key contributions include (i) synapse cell design optimizations that reduce energy and area, (ii) mappings of algorithmic learning rules to hardware-amenable implementations, and (iii) comparative studies that quantify trade-offs across device technologies (energy, latency, endurance, variability).

Her publications report crossbar-array implementations of convolutional networks with on-chip learning and low-energy feedforward networks built from spintronic devices, illustrating both the promise and the practical challenges of neuromorphic hardware. Methodologically, she combines micromagnetic simulation (Mumax3), circuit-level design (Cadence Virtuoso), and algorithmic prototyping (MATLAB/Python) to close the loop from device physics to system performance. The research advances both fundamental understanding and pragmatic design rules for building scalable, low-power neuromorphic systems suitable for edge-AI applications.
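To make the crossbar-based inference idea above concrete, here is a minimal Python sketch of an analog crossbar performing a matrix-vector multiply, with signed weights mapped onto a differential pair of device conductances and multiplicative Gaussian noise standing in for device variability. All function names, conductance ranges, and noise parameters are illustrative assumptions for this profile, not Dr. Kaushik's actual simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def weights_to_conductances(W, g_min=1e-6, g_max=1e-4):
    """Map signed weights onto a differential conductance pair
    (G+, G-) inside an assumed device conductance window, so that
    G+ - G- = scale * W.  Returns the pair and the scale factor."""
    scale = (g_max - g_min) / np.max(np.abs(W))
    G_pos = g_min + scale * np.clip(W, 0, None)
    G_neg = g_min + scale * np.clip(-W, 0, None)
    return G_pos, G_neg, scale

def crossbar_mvm(G_pos, G_neg, v_in, sigma=0.05):
    """Analog matrix-vector multiply: output currents are the
    Kirchhoff sum I = (G+ - G-) @ V, with multiplicative Gaussian
    variability on each conductance as a crude non-ideality model."""
    noisy_pos = G_pos * (1 + sigma * rng.standard_normal(G_pos.shape))
    noisy_neg = G_neg * (1 + sigma * rng.standard_normal(G_neg.shape))
    return (noisy_pos - noisy_neg) @ v_in

# Small demo: compare the noisy analog output to the ideal product.
W = rng.standard_normal((4, 8)) * 0.5
v = rng.standard_normal(8)
G_pos, G_neg, scale = weights_to_conductances(W)
i_out = crossbar_mvm(G_pos, G_neg, v, sigma=0.05)
ideal = scale * (W @ v)
rel_err = np.linalg.norm(i_out - ideal) / np.linalg.norm(ideal)
```

With modest variability the analog output tracks the ideal product closely; sweeping `sigma` is the simplest way to see why mitigating device non-idealities matters for on-chip learning accuracy.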

Areas of Interest
  • Neuromorphic Computing and On-chip Learning.
  • Spintronics for Memory and Synapse Devices (domain wall, skyrmions).
  • Analog Hardware Neural Networks and Crossbar Architectures.
  • Device-to-Algorithm Co-design (algorithm-hardware mapping).
  • Low-energy VLSI for AI / Edge AI Hardware.
  • Micromagnetic and Circuit Simulation (Mumax3, Cadence Virtuoso).
  • Machine Learning (implementation and hardware-friendly algorithms).
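The device-to-algorithm co-design and hardware-friendly machine learning items above can be illustrated with a small, assumed sketch: a backpropagation-style outer-product weight update quantized to a finite number of conductance levels, mimicking the discrete states of a non-volatile synapse. The function names, level count, and learning-rate choice are hypothetical, chosen only for this illustration.

```python
import numpy as np

def quantize(W, levels=32, w_max=1.0):
    """Snap weights to one of `levels` evenly spaced states in
    [-w_max, w_max], as a finite-state synapse device would."""
    step = 2 * w_max / (levels - 1)
    return np.clip(np.round(W / step) * step, -w_max, w_max)

def hardware_friendly_update(W, x, err, lr, levels=32):
    """Delta-rule update followed by quantization:
    W <- Q(W + lr * err x^T).  The outer product maps naturally
    onto row/column programming pulses applied to a crossbar."""
    return quantize(W + lr * np.outer(err, x), levels=levels)

# Train a single quantized layer toward a fixed target output.
rng = np.random.default_rng(1)
W = quantize(rng.standard_normal((3, 5)) * 0.2)
x = rng.standard_normal(5)
target = np.array([0.5, -0.2, 0.1])
lr = 0.5 / np.dot(x, x)          # normalized step for stability
for _ in range(50):
    y = W @ x                    # forward pass through the array
    W = hardware_friendly_update(W, x, target - y, lr)
loss = np.sum((target - W @ x) ** 2)
```

The residual loss never reaches zero here: once the ideal update becomes smaller than half a conductance step, the quantizer snaps the weights back, which is exactly the kind of algorithm-hardware trade-off (levels versus accuracy) that co-design studies quantify.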
Publications

D. Kaushik, J. Sharda, and D. Bhowmik, “Synapse cell optimization and back-propagation algorithm implementation in a domain wall synapse-based crossbar neural network for scalable on-chip learning,” Nanotechnology, vol. 31, no. 36, 2020.

D. Kaushik, U. Singh, U. Sahu, I. Sreedevi, and D. Bhowmik, “Comparing domain wall synapse with other non-volatile memory devices for on-chip learning in analog hardware neural network,” AIP Advances, vol. 10, no. 2, 025111, 2020.

V. B. Desai, D. Kaushik, J. Sharda, and D. Bhowmik, “On-chip learning of a domain-wall-synapse-crossbar-array-based convolutional neural network,” Neuromorphic Computing and Engineering, vol. 2, no. 2, 2022.

U. Saxena, D. Kaushik, M. Bansal, H. Chandel, U. Sahu, and D. Bhowmik, “Low energy implementation of feedforward neural network with backpropagation algorithm using a spin-orbit torque driven skyrmionic device,” IEEE Transactions on Magnetics, vol. 99, pp. 1–5, 2018.

S. Choudhary and D. Kaushik, “Understanding the effect of vacancy defects on spin transport in CrO2–graphene–CrO2 magnetic tunnel junction,” Modern Physics Letters B, vol. 30, no. 09, 1650102, 2016.