The Subaru Telescope, used by astronomers who collaborate with us, produces as much as 300 GB of data every night. Obtaining clean images from this large-scale data requires processing known as an astronomical pipeline, which in turn demands fast transaction processing, the topic we are working on. Our transaction processing technology achieves tens to tens of thousands of times the performance of commercial systems. This is joint research with Kavli IPMU, the Institute of Statistical Mathematics, and NTT CS Laboratories, supported by JST CREST.
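To make the transaction-processing theme concrete, here is a minimal sketch of optimistic concurrency control (OCC), one common technique in high-throughput transaction engines. This is an illustrative assumption, not necessarily the protocol used in this project; all class and variable names are hypothetical.

```python
class OCCTransaction:
    """Sketch of optimistic concurrency control: reads record versions,
    and at commit time validates that nothing read has changed before
    installing the writes."""

    def __init__(self, db):
        self.db = db            # db: key -> (version, value)
        self.read_set = {}      # key -> version observed at read time
        self.write_set = {}     # key -> new value to install

    def read(self, key):
        version, value = self.db[key]
        self.read_set[key] = version
        return value

    def write(self, key, value):
        self.write_set[key] = value

    def commit(self):
        # Validation: abort if any key we read was updated concurrently.
        for key, version in self.read_set.items():
            if self.db[key][0] != version:
                return False
        # Install writes with bumped versions.
        for key, value in self.write_set.items():
            old_version = self.db.get(key, (0, None))[0]
            self.db[key] = (old_version + 1, value)
        return True

db = {"x": (0, 100)}
t = OCCTransaction(db)
v = t.read("x")
t.write("x", v + 1)
ok = t.commit()   # no concurrent update occurred, so this commits
```

Because validation and installation touch only version counters, this style of protocol avoids holding locks during the transaction body, which is one reason OCC-based engines scale well on many-core machines.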
A real-time database is required to realize various forms of real-time artificial intelligence: managing distribution systems that fluctuate from moment to moment, tracking gravitational-wave candidate objects that appear suddenly, and grasping the immediate surroundings needed for evacuation after an earthquake. Such a system requires an HTAP (Hybrid Transactional and Analytical Processing) architecture that tightly integrates transaction processing (OLTP) with data analysis (OLAP). This is joint research with Nautilus Technologies, NEC, Pasco, Makoto Onizuka (Osaka Univ.), and Professor Keiji Ishikawa (Nagoya Univ.), supported by NEDO.
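The HTAP idea can be sketched as a single store that serves both paths over the same data: small, latency-sensitive writes (OLTP) and analytical aggregates (OLAP). The class below is a toy illustration under that assumption; the coarse lock stands in for real concurrency control, and all names are hypothetical.

```python
import threading
from collections import defaultdict

class TinyHTAPStore:
    """Toy HTAP sketch: one engine serves transactional writes and
    analytical scans over the same in-memory rows."""

    def __init__(self):
        self._lock = threading.Lock()   # placeholder for real concurrency control
        self._rows = []                 # append-optimized row store for OLTP writes

    def insert(self, sensor_id, value):
        """OLTP path: a small, latency-sensitive write."""
        with self._lock:
            self._rows.append((sensor_id, value))

    def average_by_sensor(self):
        """OLAP path: an analytical aggregate over all rows."""
        with self._lock:
            sums, counts = defaultdict(float), defaultdict(int)
            for sensor_id, value in self._rows:
                sums[sensor_id] += value
                counts[sensor_id] += 1
            return {s: sums[s] / counts[s] for s in sums}

store = TinyHTAPStore()
store.insert("feeder-1", 10.0)   # e.g. fluctuating distribution-system readings
store.insert("feeder-1", 20.0)
store.insert("feeder-2", 5.0)
averages = store.average_by_sensor()
```

In a real HTAP system the two paths would run concurrently without blocking each other, typically via multi-version storage, which is exactly the tight OLTP/OLAP integration this project studies.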
To realize gateless ticket gates and cashierless convenience stores, an enormous amount of billing processing is required. Because this billing must be highly reliable, distributed agreement (consensus) technology that synchronizes data across multiple machines is needed, together with a high-speed communication mechanism based on RDMA (Remote Direct Memory Access). This project is joint research with a financial institution.
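The core of distributed agreement can be illustrated with majority-quorum replication: a billing record is acknowledged only after a majority of machines have stored it, so no single failure can lose a committed charge. This is a greatly simplified sketch; real protocols such as Paxos or Raft also handle leader election, failure recovery, and network round trips, and every name below is hypothetical.

```python
class Replica:
    """One machine holding a copy of the billing log."""
    def __init__(self):
        self.log = []

def replicate(replicas, entry):
    """Commit an entry once a majority of replicas have appended it.
    Appending here stands in for a network round trip per replica."""
    acks = 0
    for r in replicas:
        r.log.append(entry)
        acks += 1
        if acks > len(replicas) // 2:
            return True   # majority reached: safe to acknowledge the charge
    return False          # no majority (e.g. too many replicas unreachable)

replicas = [Replica() for _ in range(3)]
committed = replicate(replicas, {"user": "u42", "charge": 120})
```

RDMA enters the picture because each of those per-replica round trips is on the critical path of every charge; bypassing the remote CPU and kernel with direct memory access cuts that latency sharply.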
Non-volatile memory, such as Intel Optane Persistent Memory (memory that retains its contents even when the power is turned off), has appeared on the market. Because non-volatile memory is a new technology, much remains unknown about which applications it can support and how systems should be designed for it. In joint research with a company that researches and develops non-volatile memory, we are designing a new memory architecture and a new data infrastructure that uses it.
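One design problem that makes non-volatile memory tricky is persist ordering: CPU stores sit in volatile caches, so only explicitly flushed data survives a power failure, and the order of flushes determines crash consistency. The toy model below simulates this (it is an assumption-laden sketch, not real hardware behavior; real code would use CLWB/SFENCE instructions or a library such as PMDK).

```python
class SimulatedPMEM:
    """Toy model of persistent memory: stores land in a volatile cache,
    and only flushed bytes survive a simulated power failure."""

    def __init__(self, size):
        self.durable = bytearray(size)   # what survives a crash
        self.cache = {}                  # offset -> pending (volatile) byte

    def store(self, offset, byte):
        self.cache[offset] = byte        # fast, but volatile until flushed

    def flush(self):
        for off, b in self.cache.items():
            self.durable[off] = b
        self.cache.clear()

    def crash(self):
        self.cache.clear()               # unflushed stores are lost

pm = SimulatedPMEM(8)
pm.store(0, 42)   # write the payload byte
pm.flush()        # persist the payload BEFORE setting the valid flag
pm.store(7, 1)    # set the valid flag at offset 7
pm.crash()        # power fails before the flag was flushed
# After recovery: the payload is durable, but the record is still
# correctly marked invalid, so no half-written record is ever visible.
```

Getting this ordering right everywhere, at full hardware speed, is one of the central questions in designing data infrastructure for non-volatile memory.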
We are researching database systems that provide various machine learning technologies, index mechanisms that use deep learning (learned indexes), and query optimizers built with deep learning. This work is supported by joint research with a company and by a Grant-in-Aid for Scientific Research (B).
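The learned-index idea can be shown in miniature: instead of a B-tree, a model predicts a key's position in a sorted array, and a short local search corrects the prediction error. The sketch below uses a single linear model, a drastic simplification of recursive-model learned indexes, and is an illustration rather than this project's actual design.

```python
def build_learned_index(keys):
    """Fit position ~ a*key + b over a sorted key array
    (least-squares on key vs. array position)."""
    n = len(keys)
    mean_k = sum(keys) / n
    mean_p = (n - 1) / 2
    var = sum((k - mean_k) ** 2 for k in keys)
    a = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(keys)) / var
    b = mean_p - a * mean_k
    return a, b

def lookup(keys, model, key):
    """Predict a position, then correct it with a short local search."""
    a, b = model
    pos = min(max(int(round(a * key + b)), 0), len(keys) - 1)
    while pos > 0 and keys[pos] > key:
        pos -= 1
    while pos < len(keys) - 1 and keys[pos] < key:
        pos += 1
    return pos if keys[pos] == key else -1   # -1: key not present

keys = sorted([3, 8, 15, 16, 23, 42, 57, 91])
model = build_learned_index(keys)
```

When the key distribution is close to the model's shape, the prediction lands within a few slots of the true position, so the index needs far less memory than a tree while answering lookups in near-constant time.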
Once a huge amount of data has been collected, there is a growing need to understand what it means and how it relates to other data. We are conducting research on data profiling to address this problem, focusing on petabyte-scale data held by a company.
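At its simplest, data profiling computes per-column statistics (type mix, null rate, distinctness) that hint at what a column means and whether it can join with columns elsewhere. The helper below is a minimal hypothetical sketch of that idea; profiling petabyte-scale data additionally demands sampling and distributed execution, which this toy omits.

```python
def profile_column(values):
    """Summarize one column: count, null rate, distinct values, whether
    the non-null values are unique (a join-key candidate), and value types."""
    n = len(values)
    nulls = sum(1 for v in values if v is None)
    non_null = [v for v in values if v is not None]
    distinct = len(set(non_null))
    return {
        "count": n,
        "null_rate": nulls / n if n else 0.0,
        "distinct": distinct,
        "unique": distinct == len(non_null),   # candidate key / join column?
        "types": sorted({type(v).__name__ for v in non_null}),
    }

profile = profile_column([1, 2, 2, None, 4])
# A column that is unique with a low null rate is a join-key candidate;
# comparing such profiles across tables suggests how datasets relate.
```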