The volume of information that companies handle has grown tremendously and continues to grow steadily, driving demand for database development. A new term has even been coined in the English language, "Big Data", meaning a body of data too large and complex to be processed by existing off-the-shelf DBMS (database management system) tools. Big Data requires the latest hardware advancements and specially created, tailor-made software.
Contemporary science has not yet settled on a universal definition of the term "database"; different interpretations reflect different approaches. We shall keep to the following definition: a database is a collection of structured, systematically organized data, stored in digital form on an electronic medium and conforming to a defined schema. Databases can be classified by various criteria. Based on their contents, they can be scientific, multimedia, customer-intelligence, geographic, historical, etc.
According to storage medium:
- Traditional databases, stored in external non-volatile memory
- In-memory databases, residing primarily in main memory
- Databases on detachable mass-storage devices
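The contrast between the first two classes can be seen with SQLite, which ships with Python: its driver accepts either a file path (a traditional, disk-backed database) or the special name `:memory:` (a purely in-memory one). A minimal sketch; the file and table names are illustrative:

```python
import os
import sqlite3
import tempfile

# Traditional database: persisted in a file on non-volatile storage.
path = os.path.join(tempfile.mkdtemp(), "example.db")
disk_db = sqlite3.connect(path)

# In-memory database: resides entirely in RAM and disappears on close.
mem_db = sqlite3.connect(":memory:")

# The DBMS API is identical; only the storage medium differs.
for db in (disk_db, mem_db):
    db.execute("CREATE TABLE t (id INTEGER, name TEXT)")
    db.execute("INSERT INTO t VALUES (1, 'alpha')")

rows = mem_db.execute("SELECT name FROM t").fetchall()
print(rows)  # [('alpha',)]
```

Only the connection string changes between the two modes, which is why the same application code can often be tested in memory and deployed against a disk-backed file.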
According to their distribution in space, databases can be "centralized" (residing entirely on one machine) or "distributed", with parts located on remote network hosts. Additionally, there are spatial, temporal, spatial-temporal and round-robin databases.
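Of these, the round-robin database is the simplest to illustrate: it keeps a fixed number of the most recent measurements, with each new entry overwriting the oldest, a scheme used by time-series tools such as RRDtool. A toy sketch of the idea (the class name and capacity are hypothetical, not from any particular product):

```python
from collections import deque

class RoundRobinDB:
    """Fixed-capacity store: the newest entry evicts the oldest."""

    def __init__(self, capacity):
        # deque with maxlen silently discards the oldest item on overflow
        self._entries = deque(maxlen=capacity)

    def record(self, timestamp, value):
        self._entries.append((timestamp, value))

    def history(self):
        return list(self._entries)

rrd = RoundRobinDB(capacity=3)
for t in range(5):       # record five samples into three slots
    rrd.record(t, t * 10)
print(rrd.history())     # [(2, 20), (3, 30), (4, 40)]
```

Because the storage footprint never grows, such databases are well suited to monitoring data where only a recent window matters.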
One should distinguish between the terms "database" and "database management system" (DBMS): a DBMS is a set of software tools that enables users to create, maintain and access databases.
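The three roles named above map onto the statement types a DBMS accepts: creating structure (DDL), maintaining entries (DML) and accessing data (queries). A minimal sketch using SQLite, bundled with Python, as the DBMS; the table and names are illustrative:

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Create: define the structure of the database (DDL).
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# Maintain: insert and update entries (DML).
db.execute("INSERT INTO users (name) VALUES ('Ada')")
db.execute("INSERT INTO users (name) VALUES ('Grace')")
db.execute("UPDATE users SET name = 'Ada Lovelace' WHERE id = 1")

# Access: query the stored data.
names = [row[0] for row in db.execute("SELECT name FROM users ORDER BY id")]
print(names)  # ['Ada Lovelace', 'Grace']
```

The same three-part division holds for any relational DBMS, from an embedded engine like SQLite to a client-server system.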
DBMS can also be classified according to access mode.
EDISON specialists have repeatedly created databases which:
- Contain tens of millions of entries
- Have several million users
- Use specially developed in-house formats that can be transferred only on RAID arrays
- Are so large that they can only be stored in the cloud