Research authors: Javier S. Turek (Intel Labs), Shailee Jain (University of Texas, Austin), Vy A. Vo (Intel Labs), Mihai Capotă (Intel Labs), Alexander G. Huth (University of Texas, Austin), Theodore L. Willke (Intel Labs)
Abstract: Recent work has shown that topological enhancements to recurrent neural networks (RNNs) can increase their expressiveness and representational capacity. Researchers explore the delayed-RNN, a single-layer RNN with a delay between the input and output. They prove that a weight-constrained version of the delayed-RNN is equivalent to a stacked-RNN. They also show that the delay gives rise to partial acausality, much like bidirectional networks. Synthetic experiments confirm that the delayed-RNN can mimic bidirectional networks, matching them on some acausal tasks and outperforming them on others. Moreover, the delayed-RNN shows performance similar to bidirectional networks on a real-world natural language processing task.
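To make the delayed-RNN idea concrete, here is a minimal NumPy sketch of the structure the abstract describes: a single-layer recurrent cell whose output for input step t is read d steps later, so each output can depend on d future inputs (the partial acausality noted above). The function name `delayed_rnn`, the Elman-style cell, and the zero padding are illustrative assumptions, not the authors' code.

```python
import numpy as np

def delayed_rnn(x, W_in, W_rec, b, delay):
    """Single-layer Elman-style RNN whose output for input step t is read
    at step t + delay (illustrative sketch, not the paper's implementation).

    x: (T, input_dim) input sequence.
    Returns (T, hidden_dim) states aligned so that row t answers for input t.
    """
    T = x.shape[0]
    # Pad with `delay` zero steps so the recurrence can run past the end.
    x_pad = np.vstack([x, np.zeros((delay, x.shape[1]))])
    h = np.zeros(b.shape[0])
    states = []
    for t in range(T + delay):
        h = np.tanh(x_pad[t] @ W_in + h @ W_rec + b)
        states.append(h)
    # Reading the state at step t + delay lets the cell absorb inputs up to
    # t + delay before committing to an output for step t.
    return np.stack(states[delay:])
```

Because the state read for step t has seen inputs up to t + delay, each output depends on a bounded window of future context, which is how the delay mimics the backward pass of a bidirectional network.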
Research authors: Jonathan Mamou (Intel Labs), Hang Le (MIT), Miguel A. Del Rio (MIT), Cory Stephenson (Intel Labs), Hanlin Tang (Intel Labs), Yoon Kim (Harvard University), SueYeon Chung (MIT and Columbia University)
Abstract: Deep neural networks (DNNs) have shown much empirical success in solving perceptual tasks across various cognitive modalities. Researchers utilize mean-field theoretic manifold analysis, a recent technique from computational neuroscience that connects the geometry of feature representations with the linear separability of classes, to analyze language representations from large-scale contextual embedding models. They explore representations from different model families and find evidence for the emergence of linguistic manifolds across layer depth, especially in ambiguous data. In addition, they find that the emergence of linear separability in these manifolds is driven by a combined reduction in manifold radius, dimensionality, and inter-manifold correlations.
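A rough illustration of the geometric quantities the analysis tracks: given point clouds of contextual embeddings grouped by linguistic class at one layer, simple descriptive proxies for radius, dimensionality, and center correlations can be computed as below. This is a hedged sketch with an assumed function name (`manifold_stats`); the paper's actual mean-field capacity estimator is considerably more involved.

```python
import numpy as np

def manifold_stats(points_per_class):
    """Simple geometric proxies for the quantities tracked in the paper:
    per-manifold radius, participation-ratio dimensionality, and
    correlations between manifold centers. NOT the paper's mean-field
    estimator, just illustrative descriptive statistics.

    points_per_class: list of (n_i, d) arrays with n_i > 1, one per
    linguistic class (e.g., all contextual embeddings of one word sense
    at a given layer).
    """
    centers, radii, dims = [], [], []
    for pts in points_per_class:
        c = pts.mean(axis=0)
        centered = pts - c
        # Radius: root-mean-square distance of points from the center.
        radii.append(np.sqrt((centered ** 2).sum(axis=1).mean()))
        # Participation ratio: (sum eigvals)^2 / sum(eigvals^2) of covariance.
        eig = np.linalg.eigvalsh(np.cov(centered.T))
        dims.append(eig.sum() ** 2 / (eig ** 2).sum())
        centers.append(c)
    centers = np.stack(centers)
    # Inter-manifold center correlations (cosine similarities).
    norm = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    return np.array(radii), np.array(dims), norm @ norm.T
```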
Research authors: Shauharda Khadka (Intel Labs), Somdeb Majumdar (Intel Labs), Santiago Miret (Intel Labs), Stephen McAleer (University of California, Irvine), Kagan Tumer (Oregon State University)
Abstract: Many cooperative multiagent reinforcement learning environments provide agents with a sparse team-based reward, as well as a dense agent-specific reward that incentivizes learning basic skills. Researchers introduce multiagent evolutionary reinforcement learning (MERL), a split-level training platform that handles the two objectives separately through two optimization processes. An evolutionary algorithm maximizes the sparse team-based objective through neuroevolution on a population of teams. Concurrently, a gradient-based optimizer trains policies to maximize only the dense agent-specific rewards. The gradient-based policies are periodically added to the evolutionary population as a means of transferring information between the two optimization processes. This enables the evolutionary algorithm to use skills learned via the agent-specific rewards toward optimizing the global objective. Results demonstrate that MERL significantly outperforms state-of-the-art methods, such as MADDPG, on a number of difficult coordination benchmarks.
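A skeletal sketch of the split-level loop described above may help. All function names (`make_team`, `evaluate_team`, `pg_train_step`, `mutate`) and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
import copy
import random

def merl_loop(make_team, evaluate_team, pg_train_step, mutate,
              pop_size=10, generations=1000, sync_every=5):
    """Illustrative MERL-style loop: neuroevolution on the sparse team
    reward plus a gradient learner on dense agent-specific rewards,
    with periodic policy migration between the two processes."""
    population = [make_team() for _ in range(pop_size)]
    pg_team = make_team()  # policies trained by the gradient-based optimizer

    for gen in range(generations):
        # Gradient side: maximize only the dense agent-specific rewards.
        pg_train_step(pg_team)

        # Evolutionary side: select teams on the sparse team-based reward.
        scores = [evaluate_team(team) for team in population]
        ranked = [t for _, t in sorted(zip(scores, population),
                                       key=lambda p: p[0], reverse=True)]
        elites = ranked[: pop_size // 2]
        # Refill the population with mutated copies of the elites.
        population = elites + [mutate(copy.deepcopy(random.choice(elites)))
                               for _ in range(pop_size - len(elites))]

        # Information transfer: inject the gradient policies into the
        # population so evolution can exploit the skills they have learned.
        if gen % sync_every == 0:
            population[-1] = copy.deepcopy(pg_team)
```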
Research authors: Yi-Ling Qiao (University of Maryland, College Park), Junbang Liang (University of Maryland, College Park), Vladlen Koltun (Intel Labs), Ming C. Lin (University of Maryland, College Park)
Abstract: Researchers developed a scalable framework for differentiable physics that can support a large number of objects and their interactions. To accommodate objects with arbitrary geometry and topology, they adopt meshes as a representation and leverage the sparsity of contacts for scalable differentiable collision handling. Collisions are resolved in localized regions to minimize the number of optimization variables even when the number of simulated objects is high. They further accelerate implicit differentiation of optimization with nonlinear constraints. Experiments demonstrate that the presented framework requires up to two orders of magnitude less memory and computation in comparison to recent particle-based methods. They further validate the approach on inverse problems and control scenarios, where it outperforms derivative-free and model-free baselines by at least an order of magnitude.
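The implicit-differentiation step can be illustrated with the implicit function theorem in the unconstrained case (the paper handles nonlinear constraints, so this is a simplified, hedged sketch, and all names are assumptions). If x*(θ) minimizes f(x, θ), then ∇ₓf(x*, θ) = 0, so dx*/dθ = -H⁻¹ ∂(∇ₓf)/∂θ, and gradients can flow through the solver by solving a single adjoint linear system rather than unrolling the optimization.

```python
import numpy as np

def implicit_grad(hess_xx, jac_xtheta, dloss_dx):
    """Gradient of a downstream loss L(x*) w.r.t. parameters theta, through
    an optimum x*(theta), via the implicit function theorem. Simplified,
    unconstrained sketch; the paper's setting adds nonlinear constraints.

    hess_xx:    (n, n) Hessian of f w.r.t. x at the solution.
    jac_xtheta: (n, p) mixed derivatives d(grad_x f)/d(theta).
    dloss_dx:   (n,)  gradient of the loss w.r.t. x*.
    """
    # Solve one adjoint system instead of forming the inverse Hessian:
    # lambda = H^{-T} dloss_dx, then dL/dtheta = -lambda^T jac_xtheta.
    lam = np.linalg.solve(hess_xx.T, dloss_dx)
    return -lam @ jac_xtheta
```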
Research authors: Aleksei Petrenko (Intel Labs, University of Southern California), Zhehui Huang (University of Southern California), Tushar Kumar (University of Southern California), Gaurav Sukhatme (University of Southern California), Vladlen Koltun (Intel Labs)
Abstract: Increasing the scale of reinforcement learning experiments has allowed researchers to achieve unprecedented results both in training sophisticated agents for video games and in sim-to-real transfer for robotics, but doing so is cost prohibitive. Researchers instead optimized the efficiency and resource utilization of reinforcement learning algorithms rather than relying on distributed computation. Their system, Sample Factory, is a high-throughput training architecture optimized for a single-machine setting that combines a highly efficient, asynchronous, GPU-based sampler with off-policy correction techniques. Researchers achieved throughput higher than 10^5 environment frames/second on non-trivial control problems in 3D without sacrificing sample efficiency.
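The decoupled sampler/learner structure the abstract describes can be sketched with standard multiprocessing. This toy version (random actions, trivial dynamics, a print statement standing in for a gradient update) is an assumption-laden illustration of the asynchronous pipeline, not Sample Factory's actual code, which additionally splits CPU environment stepping from batched GPU policy inference.

```python
import multiprocessing as mp
import random

def rollout_worker(traj_queue, steps_per_rollout=32):
    """Samples rollouts from a toy stand-in environment and hands them
    to the learner through a shared queue, so sampling and learning
    proceed asynchronously instead of in lockstep."""
    state = 0.0
    while True:
        traj = []
        for _ in range(steps_per_rollout):
            action = random.choice([-1.0, 1.0])  # stand-in for policy inference
            state += action                      # toy environment dynamics
            traj.append((state, action, -abs(state)))
        traj_queue.put(traj)

def learner(traj_queue, updates=100, batch_size=8):
    """Consumes trajectories as they arrive. Because workers may act with
    slightly stale policy weights, a real learner applies the off-policy
    correction the abstract mentions; here a print stands in for the update."""
    for i in range(updates):
        batch = [traj_queue.get() for _ in range(batch_size)]
        mean_r = sum(r for traj in batch for _, _, r in traj) / sum(map(len, batch))
        print(f"update {i}: mean reward {mean_r:.2f}")

if __name__ == "__main__":
    q = mp.Queue(maxsize=64)
    for _ in range(4):
        mp.Process(target=rollout_worker, args=(q,), daemon=True).start()
    learner(q)
```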