{"id":96,"date":"2023-05-19T02:50:59","date_gmt":"2023-05-19T02:50:59","guid":{"rendered":"https:\/\/2023.cogsima.org\/?page_id=96"},"modified":"2023-10-12T20:21:24","modified_gmt":"2023-10-12T20:21:24","slug":"focus-sessions","status":"publish","type":"page","link":"https:\/\/2023.cogsima.org\/index.php\/focus-sessions\/","title":{"rendered":"Focus Sessions"},"content":{"rendered":"\n<h2 class=\"wp-block-heading has-large-font-size\">Focus Session #1 &#8211; Neuroergonomics<\/h2>\n\n\n\n<p><strong>Overview<\/strong>: Learn about the newly emerging field of neuroergonomics and how wearables and other devices can enhance your research by enabling capabilities such as human-aware autonomy, cognitively aided design, and ultimately cognitive situation management.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading has-large-font-size\">Focus Session #2 &#8211; Semantic Forensics<\/h2>\n\n\n\n<p><strong>Overview<\/strong>: Learn about the state of the art in detecting mis- and disinformation through this focus session highlighting the algorithms and techniques developed on the DARPA SemaFor program. This session will include an overview of the DAC model of detection, attribution, and characterization of manipulated media, a view into the interfaces being developed for operational use, and deep dives into newly developed algorithms going up against the latest generative AI techniques.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading has-large-font-size\">Focus Session #3 &#8211; University of Pennsylvania GRASP lab<\/h2>\n\n\n\n<p><strong>Overview<\/strong>: The General Robotics, Automation, Sensing and Perception (GRASP) Laboratory is an interdisciplinary academic and research center within the School of Engineering and Applied Science at the University of Pennsylvania. 
Founded in 1979, the GRASP Lab is a premier robotics incubator that fosters collaboration among students, research staff, and faculty focusing on fundamental research in vision, perception, control systems, automation, and machine learning. This focus session will highlight relevant and cutting-edge work the GRASP lab has been doing in the field of Situation Management.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Talks<\/h2>\n\n\n\n<p><strong>Title<\/strong>: A Picture of the Prediction Space of Deep Networks<\/p>\n\n\n\n<p><strong>Presenter<\/strong>: Pratik Chaudhari<\/p>\n\n\n\n<p><strong>Abstract<\/strong>: Deep networks have many more parameters than the number of training samples and can therefore overfit&#8212;and yet, they predict remarkably accurately in practice. Training such networks is a high-dimensional, large-scale, and non-convex optimization problem and should be prohibitively difficult&#8212;and yet, it is quite tractable. This talk aims to illuminate these puzzling contradictions.<\/p>\n\n\n\n<p>We will argue that deep networks generalize well because of a characteristic structure in the space of learnable tasks. The input correlation matrix for typical tasks has a \u201csloppy\u201d eigenspectrum where, in addition to a few large eigenvalues, there is a large number of small eigenvalues distributed uniformly over a very large range. As a consequence, the Hessian and the Fisher Information Matrix of a trained network also have a sloppy eigenspectrum. Using these ideas, we will demonstrate an analytical, non-vacuous PAC-Bayes generalization bound for general deep networks.<\/p>\n\n\n\n<p>We will next develop information-geometric techniques to analyze the trajectories of the predictions of deep networks during training. By examining the underlying high-dimensional probabilistic models, we will reveal that the training process explores an effectively low-dimensional manifold. 
Networks with a wide range of architectures and sizes, trained using different optimization methods, regularization techniques, data augmentation techniques, and weight initializations, lie on the same manifold in the prediction space. We will also show that predictions of networks trained on different tasks (e.g., different subsets of ImageNet) using different representation learning methods (e.g., supervised, meta-, semi-supervised, and contrastive learning) also lie on a low-dimensional manifold.<\/p>\n\n\n\n<p><strong>References<\/strong>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>1. Does the data induce capacity control in deep learning? Rubing Yang, Jialin Mao, and Pratik Chaudhari. [ICML &#8217;22]&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2110.14163\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/arxiv.org\/abs\/2110.14163<\/a><\/li>\n\n\n\n<li>2. Deep Reference Priors: What is the best way to pretrain a model? Yansong Gao, Rahul Ramesh, and Pratik Chaudhari. [ICML &#8217;22]&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2202.00187\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/arxiv.org\/abs\/2202.00187<\/a><\/li>\n\n\n\n<li>3. A picture of the space of typical learnable tasks. Rahul Ramesh, Jialin Mao, Itay Griniasty, Rubing Yang, Han Kheng Teoh, Mark Transtrum, James P. Sethna, and Pratik Chaudhari. [ICML &#8217;23]&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2210.17011\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/arxiv.org\/abs\/2210.17011<\/a><\/li>\n\n\n\n<li>4. 
The Training Process of Many Deep Networks Explores the Same Low-Dimensional Manifold. Jialin Mao, Itay Griniasty, Han Kheng Teoh, Rahul Ramesh, Rubing Yang, Mark K. Transtrum, James P. Sethna, and Pratik Chaudhari. 2023.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2305.01604\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/arxiv.org\/abs\/2305.01604<\/a><\/li>\n<\/ul>\n\n\n\n<p><strong>Bio<\/strong>: Pratik Chaudhari is an Assistant Professor in Electrical and Systems Engineering and Computer and Information Science at the University of Pennsylvania. He is a core member of the GRASP Laboratory. From 2018 to 2019, he was a Senior Applied Scientist at Amazon Web Services and a Postdoctoral Scholar in Computing and Mathematical Sciences at Caltech. Pratik received his PhD (2018) in Computer Science from UCLA, and his Master&#8217;s (2012) and Engineer&#8217;s (2014) degrees in Aeronautics and Astronautics from MIT. He was a part of NuTonomy Inc. (now Hyundai-Aptiv Motional) from 2014 to 2016. He is the recipient of the Amazon Machine Learning Research Award (2020), the NSF CAREER Award (2022), and the Intel Rising Star Faculty Award (2022).<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p><strong>Title<\/strong>: Composable Representations for Lifelong Learning in Autonomous Systems<\/p>\n\n\n\n<p><strong>Presenter<\/strong>: Eric Eaton, PhD, University of Pennsylvania<\/p>\n\n\n\n<p><strong>Abstract<\/strong>: Lifelong learning is a key characteristic of human intelligence, largely responsible for the variety and complexity of our behavior. This process allows us to rapidly learn new skills by building upon and continually refining our learned knowledge over a lifetime of experience. 
Over the past few years, there has been rapid progress toward developing these capabilities, with composable representations showing exceptional promise for enabling lifelong learning. In this talk, I will discuss this progress and its application to autonomous systems, examining how far we have come and the open problems that remain in realizing the goal of lifelong machine learning.<\/p>\n\n\n\n<div class=\"wp-block-media-text is-stacked-on-mobile\" style=\"grid-template-columns:32% auto\"><figure class=\"wp-block-media-text__media\"><img decoding=\"async\" src=\"https:\/\/www.seas.upenn.edu\/~eeaton\/images\/headshot-Penn-Eric.jpg\" alt=\"Headshot of Eric Eaton\"\/><\/figure><div class=\"wp-block-media-text__content\">\n<p><strong>Bio<\/strong>: Eric Eaton is a research associate professor in the Department of Computer and Information Science at the University of Pennsylvania, and a member of the GRASP (General Robotics, Automation, Sensing, &amp; Perception) lab. He also has a secondary appointment in biomedical and health informatics at Children&#8217;s Hospital of Philadelphia. His primary research interests lie in the field of machine learning and interactive AI, with applications to service robotics and personalized medicine. 
In particular, his research focuses on developing versatile AI systems that can learn multiple tasks over a lifetime of experience in complex environments, transfer learned knowledge to rapidly acquire new abilities, and collaborate effectively with humans and other agents through interaction.<\/p>\n<\/div><\/div>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p><strong>Title<\/strong>: Attentive Abstractions for Flexible Vision-Based Robot Learners<\/p>\n\n\n\n<p><strong>Presenter<\/strong>: Dinesh Jayaraman<\/p>\n\n\n\n<p><strong>Abstract<\/strong>: General-purpose robots of the future will need vision and learning, but vision-based robot learning today is inflexible and inefficient: it needs robot- and task-specific training experiences, expert-engineered task specifications, and large computational resources. This talk will cover algorithms that dynamically select task-relevant information during sensing, representation, decision making, and learning, enabling flexibility in pre-training controller modules, layperson-friendly task specification, and efficient resource allocation. I will speak about our work on interactive perception of task rewards for RL, pre-trained object-centric visual representations that track task-directed progress, and task-relevant world model learning for model-based RL.<\/p>\n\n\n\n<p><strong>Bio<\/strong>: Dinesh Jayaraman is an assistant professor at the University of Pennsylvania&#8217;s CIS department and GRASP lab. He leads the Perception, Action, and Learning (Penn PAL) research group, which works at the intersections of computer vision, robotics, and machine learning. 
Dinesh&#8217;s research has received a Best Paper Award at CoRL &#8217;22, a Best Paper Runner-Up Award at ICRA &#8217;18, a Best Application Paper Award at ACCV &#8217;16, an Amazon Research Award &#8217;21, and the NSF CAREER Award &#8217;23, and his work has been featured on the cover of Science Robotics and in several press outlets. His webpage is at&nbsp;<a href=\"https:\/\/www.seas.upenn.edu\/~dineshj\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.seas.upenn.edu\/~dineshj\/<\/a><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n","protected":false},"excerpt":{"rendered":"<p>Focus Session #1 &#8211; Neuroergonomics Overview: Learn about the newly emerging field of neuroergonomics and how wearables and other devices can enhance your research through enabling various capabilities such as human-aware autonomy, cognitively aided design, and ultimately cognitive situation management. Focus Session #2 &#8211; Semantic Forensics Overview: Learn about the state of the art of detecting mis- and disinformation 
through<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-96","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/2023.cogsima.org\/index.php\/wp-json\/wp\/v2\/pages\/96","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/2023.cogsima.org\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/2023.cogsima.org\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/2023.cogsima.org\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/2023.cogsima.org\/index.php\/wp-json\/wp\/v2\/comments?post=96"}],"version-history":[{"count":5,"href":"https:\/\/2023.cogsima.org\/index.php\/wp-json\/wp\/v2\/pages\/96\/revisions"}],"predecessor-version":[{"id":299,"href":"https:\/\/2023.cogsima.org\/index.php\/wp-json\/wp\/v2\/pages\/96\/revisions\/299"}],"wp:attachment":[{"href":"https:\/\/2023.cogsima.org\/index.php\/wp-json\/wp\/v2\/media?parent=96"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}