Hello there, and welcome to my website! My name is Samuel Schmidgall and I’m a researcher & engineer focused on applying AI to the field of medicine and medical robotics.
Here, we introduce the Surgical Robot Transformer (SRT). We explore whether surgical manipulation tasks can be learned on the da Vinci robot via imitation learning, and demonstrate our findings through the successful execution of three surgical tasks: tissue manipulation, needle handling, and knot-tying.
This perspective aims to provide a path toward increasing robot autonomy in robot-assisted surgery through the development of a multi-modal, multi-task, vision-language-action model for surgical robots.
This paper introduces SurGen, a text-guided diffusion model tailored for surgical video synthesis, producing the highest resolution and longest duration videos among existing surgical video generation models.
AgentClinic turns static medical QA problems into interactive agents in a simulated clinical environment, presenting a more clinically relevant challenge for multimodal language models.
This paper introduces GP-VLS, a general-purpose vision language model for surgery that integrates medical and surgical knowledge with visual scene understanding.
We show that adding simple cognitive bias prompts significantly degrades the performance of LLMs on medical QA. We introduce BiasMedQA to evaluate robustness to these biases and demonstrate mitigation techniques.
Surgical Gym is an open-source high performance platform for surgical robot learning where both the physics simulation and reinforcement learning occur directly on the GPU.
Autonomous surgical robots have the potential to transform surgery and increase access to quality health care. Advances in artificial intelligence have produced robots that learn by mimicking human demonstrations. This approach may be feasible for surgical robots, but creating robots that emulate surgeon demonstrations poses distinct obstacles.
This paper introduces a large dataset of surgical videos; a general surgery vision transformer (GSViT) pretrained on those videos; and code and weights for procedure-specific fine-tuned versions of GSViT across 10 procedures.
This paper outlines the challenges and training needs of junior researchers working across AI and neuroscience. We also provide advice and resources to help trainees plan their NeuroAI careers.
We show that when a patient proposes incorrect bias-validating information, the diagnostic accuracy of LLMs drops dramatically, revealing a high susceptibility to errors in self-diagnosis.
This paper introduces the Synaptic Motor Adaptation (SMA) algorithm, a novel approach to real-time online adaptation in quadruped robots that uses neuroscience-derived rules of synaptic plasticity with three-factor learning.
We introduce a bi-level optimization framework that seeks to both solve online learning tasks and improve the ability to learn online using models of plasticity from neuroscience.
We translate the motor circuit of the C. elegans nematode into artificial neural networks at varying levels of biophysical realism and evaluate the outcome of training these networks on motor and non-motor behavioral tasks.
We construct locked fronts for a particular piecewise linear reproduction function. These fronts are shown to be linear combinations of exponentially decaying solutions to the linear system near the unstable state.
We introduce a framework for simultaneously learning the underlying fixed weights and the rules governing the dynamics of synaptic plasticity and neuromodulated synaptic plasticity in SNNs through gradient descent.
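As a rough illustration of the general idea (a minimal sketch, not this paper's actual implementation, and using a rate-based rather than spiking network), a neuromodulated Hebbian rule can be written so that both the fixed weights and the per-synapse plasticity coefficients are ordinary differentiable parameters; all names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Differentiable parameters: fixed weights W and per-synapse
# plasticity coefficients A (both would be trained by gradient descent).
n_in, n_out = 4, 3
W = rng.normal(scale=0.1, size=(n_out, n_in))  # fixed weight component
A = rng.normal(scale=0.1, size=(n_out, n_in))  # plasticity coefficient
H = np.zeros((n_out, n_in))                    # Hebbian trace (fast state)
decay = 0.9                                    # trace decay rate

def step(x, modulator):
    """One forward step with a neuromodulated Hebbian trace.

    Effective weight = fixed weight + plasticity coefficient * trace;
    the trace accumulates pre/post activity scaled by a third factor
    (the modulatory signal), giving a three-factor learning rule.
    """
    global H
    y = np.tanh((W + A * H) @ x)
    # Three-factor update: pre activity, post activity, modulator.
    H = decay * H + modulator * np.outer(y, x)
    return y

x = rng.normal(size=n_in)
y = step(x, modulator=0.5)
```

In a framework of this kind, gradients flow through this recurrence into both W and A (and, in the neuromodulated variant, into whatever network produces the modulatory signal), so the plasticity rule itself is optimized alongside the weights.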
We present a trajectory planning method for multiple vehicles to navigate a crowded environment, such as a gridlocked intersection or a small parking area.
We show that quadrupedal agents evolved using self-modifying plastic networks adapt better to complex meta-learning tasks, even outperforming the same network updated with gradient-based algorithms while taking less time to train.