iniVation AG invents, produces, and sells neuromorphic technologies, with a special focus on event-based vision for business applications. Slides by S. E. Jakobsen, board member of iniVation.
Prophesee (formerly Chronocam) is the inventor and supplier of four generations of event-based sensors, including commercial-grade versions, as well as the industry’s largest software suite. The company focuses on industrial, mobile-IoT, and automotive applications.
SLAMcore develops localisation and mapping solutions for AR/VR, robotics, and autonomous vehicles.
CelePixel (formerly Hillhouse Technology) offers integrated sensory platforms that incorporate various components and technologies, including a processing chipset and an image sensor (a dynamic vision sensor called CeleX).
Serrano-Gotarredona, R., Oster, M., Lichtsteiner, P., Linares-Barranco, A., Paz-Vicente, R., Gomez-Rodriguez, F., Riis, H.K., Delbruck, T., Liu, S.-C., Zahnd, S., Whatley, A.M., Douglas, R., Hafliger, P., Jimenez-Moreno, G., Civit, A., Serrano-Gotarredona, T., Acosta-Jimenez, A., Linares-Barranco, B., AER building blocks for multi-layer multi-chip neuromorphic vision systems,
Advances in Neural Information Processing Systems (NIPS), pp. 1217-1224, 2006.
Liu, S.-C. and Delbruck, T., Neuromorphic sensory systems,
Current Opinion in Neurobiology, 20:3(288-295), 2010.
Kirkland, P., Di Caterina, G., Soraghan, J., Matich, G., Neuromorphic technologies for defence and security,
SPIE vol 11540, Emerging Imaging and Sensing Technologies for Security and Defence V; and Advanced Manufacturing Technologies for Micro- and Nanosystems in Security and Defence III; 2020.
Delbruck, T., Fun with asynchronous vision sensors and processing.
Computer Vision - ECCV 2012. Workshops and Demonstrations. Springer Berlin/Heidelberg, 2012. A position paper and summary of recent accomplishments of the INI Sensors' group.
Lagorce, X., Ieng, S. H., Benosman, R., Event-based features for robotic vision,
IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2013, pp. 4214-4219.
Braendli, C., Strubel, J., Keller, S., Scaramuzza, D., Delbruck, T., ELiSeD - An Event-Based Line Segment Detector,
Int. Conf. on Event-Based Control Comm. and Signal Proc. (EBCCSP), 2016. PDF
Boettiger, J. P., MSc 2020,
A Comparative Evaluation of the Detection and Tracking Capability Between Novel Event-Based and Conventional Frame-Based Sensors.
Delbruck, T., Frame-free dynamic digital vision,
Int. Symp. on Secure-Life Electronics, Advanced Electronics for Quality Life and Society, pp. 21-26, 2008. PDF
Cook et al., IJCNN 2011, Interacting maps for fast visual interpretation. (Joint estimation of optical flow, image intensity and angular velocity with a rotating event camera).
Tschechne, S., Brosch, T., Sailer, R., von Egloffstein, N., Abdul-Kreem, L.I., Neumann, H., On event-based motion detection and integration,
Int. Conf. Bio-inspired Information and Comm. Technol. (BICT), 2014. PDF
Gallego et al., CVPR 2018, A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth and Optical Flow Estimation.
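The core idea of contrast maximization can be sketched in a few lines of NumPy: warp the events to a common reference time along a candidate optical flow, accumulate them into an image, and score the candidate by the image's variance (its contrast). The synthetic events and grid search below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def contrast(events, v, shape=(32, 32)):
    """Warp events (t, x, y) to a common reference time along candidate
    flow v = (vx, vy), accumulate into an image, return its variance."""
    t, x, y = events[:, 0], events[:, 1], events[:, 2]
    xw = np.round(x - v[0] * t).astype(int)
    yw = np.round(y - v[1] * t).astype(int)
    ok = (xw >= 0) & (xw < shape[1]) & (yw >= 0) & (yw < shape[0])
    img = np.zeros(shape)
    np.add.at(img, (yw[ok], xw[ok]), 1.0)
    return img.var()

# Synthetic events: a vertical edge at x = 10 moving right at 5 px/s.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, 500)
y = rng.integers(0, 32, 500).astype(float)
x = 10.0 + 5.0 * t                      # true flow: vx = 5
events = np.stack([t, x, y], axis=1)

# Grid search: the candidate that best "deblurs" the events wins.
candidates = [(vx, 0.0) for vx in np.linspace(0.0, 10.0, 11)]
best = max(candidates, key=lambda v: contrast(events, v))
```

The correct flow collapses all events of the edge onto a single image column, which maximizes the variance of the accumulated image; wrong candidates smear the events and lower it.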
Wang, Z. W., Jiang, W., He, K., Shi, B., Katsaggelos, A., Cossairt, O., Event-driven Video Frame Synthesis,
IEEE Int. Conf. Computer Vision Workshops (ICCVW), 2019. PDF
Belbachir, A.N., Litzenberger, M., Schraml, S., Hofstätter, M., Bauer, D., Schön, P., Humenberger, M., Sulzbachner, C., Lunden, T., Merne, M., CARE: A dynamic stereo vision sensor system for fall detection,
IEEE Int. Symp. Circuits and Systems (ISCAS), 2012.
Maqueda et al., CVPR 2018. Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars.
Park, P.K.J., Kim, J.-S., Shin, C.-W., Lee, H., Liu, W., Wang, Q., Roh, Y., Kim, J., Ater, Y., Soloveichik, E., Ryu, H. E., Low-Latency Interactive Sensing for Machine Vision,
IEEE Int. Electron Devices Meeting (IEDM), 2019.
Sabater, A., Montesano, L., Murillo, A., Event Transformer. A sparse-aware solution for efficient event data processing,
IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2022. PDF, Supp. Video, Code.
Sekikawa, Y., Ishikawa, K., Hara, K., Yoshida, Y., Suzuki, K., Sato, I., Saito, H., Constant Velocity 3D Convolution,
IEEE Int. Conf. 3D Vision (3DV), 2018.
Zhu et al., RSS 2018, EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras.
Paredes-Valles et al., TPAMI 2019, Unsupervised Learning of a Hierarchical Spiking Neural Network for Optical Flow Estimation: From Events to Global Motion Perception.
Delbruck, T., Frame-free dynamic digital vision,
Proc. Int. Symp. on Secure-Life Electronics, Advanced Electronics for Quality Life and Society, 2008, 1:21–26. PDF
Vasco, V., Glover, A., Tirupachuri, Y., Solari, F., Chessa M., Bartolozzi C., Vergence control with a neuromorphic iCub,
IEEE Int. Conf. Humanoid Robotics (Humanoids), 2016.
Cohen, G., Afshar, S., van Schaik, A., Wabnitz, A., Bessell, T., Rutten, M., Morreale, B., Event-based Sensing for Space Situational Awareness,
Proc. Advanced Maui Optical and Space Surveillance Technologies Conf. (AMOS), 2017.
Scheerlinck, C., Rebecq, H., Stoffregen, T., Barnes, N., Mahony, R., Scaramuzza, D., CED: Color Event Camera Dataset,
IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2019. Slides, Video pitch.
Lee, A. J., Cho, Y., Yoon, S., Shin, Y., Kim, A., ViViD: Vision for Visibility Dataset,
IEEE Int. Conf. Robotics and Automation (ICRA) Workshop: Dataset Generation and Benchmarking of SLAM Algorithms for Robotics and VR/AR, 2019.
The MNIST-DVS and FLASH-MNIST-DVS datasets are based on the original frame-based MNIST dataset. MNIST-DVS consists of DVS128 recordings of moving MNIST digits (at 3 scales), while FLASH-MNIST-DVS is recorded by flashing the digits on a monitor.
POKER-DVS. From a set of DVS recordings of very fast poker card browsing, 32x32 pixel windows tracking the symbols are cropped. On average each symbol lasts about 10-30ms.
SLOW-POKER-DVS. Poker card symbols printed on paper are moved at "human speed" in front of a DVS camera and recorded at 128x128 resolution.
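Recordings such as these are streams of (timestamp, x, y, polarity) events rather than frames; a common first processing step is to accumulate a time window of events into a 2D histogram image. A minimal sketch on synthetic data (the 128x128 DVS128-style resolution matches the datasets above; everything else is illustrative):

```python
import numpy as np

def events_to_frame(t, x, y, p, t0, t1, shape=(128, 128)):
    """Accumulate events with timestamps in [t0, t1) into one signed
    frame: ON events add +1, OFF events add -1 at their pixel."""
    m = (t >= t0) & (t < t1)
    frame = np.zeros(shape, dtype=np.int64)
    np.add.at(frame, (y[m], x[m]), np.where(p[m] > 0, 1, -1))
    return frame

# Synthetic DVS128-style event stream.
rng = np.random.default_rng(1)
n = 1000
t = np.sort(rng.uniform(0.0, 1.0, n))   # timestamps in seconds
x = rng.integers(0, 128, n)             # pixel column
y = rng.integers(0, 128, n)             # pixel row
p = rng.integers(0, 2, n)               # polarity: 0 = OFF, 1 = ON

frame = events_to_frame(t, x, y, p, 0.0, 0.5)
```

Using `np.add.at` (rather than fancy-indexed `+=`) is what makes repeated events at the same pixel accumulate correctly.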
CeleX5 ROS Wrapper A ROS driver and other tools for the CeleX5_MP event-based sensor (which has a high resolution of 1280×800).
Synchronization
Sync Toolbox. This open-source toolbox provides a Qt-based GUI for easy hardware-level multi-sensor synchronization (Prophesee Gen 3.1 included and tested). After proper configuration of the software, users can seamlessly record new ROS bags.
BIMVEE Python tools for Batch Import, Manipulation, Visualisation and Export of Events and other timestamped data. Imports from various file formats into a common workspace format, including native Python import of rosbags.
Tonic provides publicly available event datasets and data transformations much like Torchvision/audio.
dv_ros ROS package for accumulating event frames with iniVation Dynamic Vision System's dv-sdk.
dvs_event_server ROS package used to transport the "dvs/events" ROS topic to Python through protobuf and zmq, because the Python ROS callback has a large delay.
AEStream A fast C++ library with a Python interface for streaming Address Event representations directly from iniVation and Prophesee cameras to various sinks, such as STDOUT, UDP (network), or PyTorch.
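As a rough illustration of what streaming events over a byte-oriented sink like UDP involves, the sketch below serializes events into fixed-size binary records with Python's struct module. The 16-byte record layout is a made-up example, not AEStream's actual wire protocol:

```python
import struct

# Hypothetical 16-byte wire record: uint64 timestamp (us), uint16 x,
# uint16 y, uint8 polarity, 3 pad bytes. Illustrative only -- this is
# NOT AEStream's real protocol.
FMT = "<QHHB3x"
REC = struct.calcsize(FMT)              # 16 bytes per event

def pack_events(events):
    """Serialize (t, x, y, p) tuples into one binary payload."""
    return b"".join(struct.pack(FMT, *e) for e in events)

def unpack_events(buf):
    """Inverse of pack_events: payload back to (t, x, y, p) tuples."""
    return [struct.unpack(FMT, buf[i:i + REC])
            for i in range(0, len(buf), REC)]

events = [(1000, 12, 34, 1), (1500, 13, 34, 0)]
payload = pack_events(events)           # would be one UDP datagram body
```

Fixed-size little-endian records with explicit padding keep the format trivially seekable and portable across the C++ producer and a Python consumer.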
AEDAT decoder A fast AEDAT 4 Python reader, with an underlying Rust implementation.
aedat-rs Standalone Rust library for decoding AEDAT 4 files for use in bespoke Rust event systems.
expelliarmus A pip-installable Python library to decode DAT, EVT2 and EVT3 files generated by Prophesee cameras into structured NumPy arrays.
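Decoders like this typically return one structured-array record per event, so fields can be accessed column-wise without creating per-event Python objects. The dtype below is illustrative; the exact field names and widths of any given decoder may differ:

```python
import numpy as np

# Illustrative structured dtype for decoded events; the field names
# and widths are assumptions, not a specific decoder's exact layout.
event_dtype = np.dtype([("t", np.int64),   # timestamp in microseconds
                        ("x", np.int16),   # pixel column
                        ("y", np.int16),   # pixel row
                        ("p", np.uint8)])  # polarity (0/1)

events = np.array([(10, 5, 7, 1), (25, 5, 8, 0), (40, 6, 7, 1)],
                  dtype=event_dtype)

# Column-wise field access and boolean masking, all vectorized.
duration_us = int(events["t"][-1] - events["t"][0])
on_events = events[events["p"] == 1]
```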
Mahowald, M., VLSI Analogs of Neuronal Visual Processing: A Synthesis of Form and Function,
Ph.D. thesis, California Institute of Technology, Pasadena, CA, 1992. PDF
She won Caltech's Clauser Prize for the best PhD thesis for this work, which included the silicon retina, AER communication, and a beautiful stereopsis chip.