Reference material: http://www.cnblogs.com/sharpxiajun/p/5585613.html

In the era of big data, data volumes are enormous, and storing and managing them in a traditional relational database is difficult. To store massive amounts of data we have HDFS, which can aggregate the hard disks of thousands of servers into one super-large disk. To make that data valuable we have MapReduce, which can run computations over this super-large disk. Facing such data volumes there is still one urgent need: how to quickly retrieve the data we want. That function is what HBase undertakes.

HBase uses indexing techniques to quickly find the required data within massive data sets.

In essence, HBase provides a computing model for fast retrieval over massive data.
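The key to that fast retrieval is that HBase keeps rows physically sorted by row key, so a point lookup or range scan never needs a full table scan. As a rough illustration (plain Python standing in for the idea, not the HBase API, and with made-up row keys), a sorted key list supports logarithmic-time lookup via binary search:

```python
import bisect

# Toy model of an HBase region: row keys kept in sorted order, with
# values stored alongside. Real HBase stores sorted KeyValues in HFiles.
row_keys = ["row-001", "row-003", "row-007", "row-042"]
values = {"row-001": "a", "row-003": "b", "row-007": "c", "row-042": "d"}

def get(row_key):
    """Binary-search the sorted key list: O(log n) instead of a full scan."""
    i = bisect.bisect_left(row_keys, row_key)
    if i < len(row_keys) and row_keys[i] == row_key:
        return values[row_key]
    return None

def scan(start, stop):
    """Range scan: return all rows with start <= key < stop."""
    lo = bisect.bisect_left(row_keys, start)
    hi = bisect.bisect_left(row_keys, stop)
    return [(k, values[k]) for k in row_keys[lo:hi]]

print(get("row-007"))              # c
print(scan("row-002", "row-010"))  # [('row-003', 'b'), ('row-007', 'c')]
```

Because the keys stay sorted, a range scan only touches the rows inside the requested interval, which is why well-designed row keys matter so much in HBase.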

Reference material: http://www.cnblogs.com/sharpxiajun/archive/2013/06/02/3114180.html

HDFS: the Hadoop distributed file system

Hive is a data-warehouse tool built on Hadoop. It can map a structured data file to a database table, provides a complete SQL query capability, and translates SQL statements into MapReduce jobs to run.

Writing raw Hadoop MapReduce jobs demands too much specialist skill, so Facebook developed the Hive framework on top of them.
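To see what Hive's translation amounts to, consider a query such as `SELECT dept, AVG(salary) FROM emp GROUP BY dept` (the table and column names here are made up for illustration). A rough sketch in plain Python of the MapReduce job it would compile to, with the map phase emitting the GROUP BY key and the reduce phase computing the aggregate:

```python
from collections import defaultdict

# Hypothetical rows of an `emp` table stored as a structured file in HDFS.
rows = [("sales", 100), ("sales", 300), ("eng", 200)]

def map_phase(row):
    dept, salary = row
    return dept, salary  # emit (GROUP BY key, value to aggregate)

def reduce_phase(dept, salaries):
    return dept, sum(salaries) / len(salaries)  # AVG(salary) per dept

# The framework's shuffle step groups mapped values by key.
groups = defaultdict(list)
for dept, salary in map(map_phase, rows):
    groups[dept].append(salary)

result = dict(reduce_phase(d, s) for d, s in groups.items())
print(result)  # {'sales': 200.0, 'eng': 200.0}
```

Hive's actual planner is far more sophisticated, but the point stands: the user writes one line of SQL and the framework generates and schedules the map, shuffle, and reduce work.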

Reference material: http://www.cnblogs.com/sharpxiajun/archive/2013/06/15/3137765.html

1. What is a distributed file system?

A file system that manages storage across multiple machines in a network is called a distributed file system.

2. Why do we need a distributed file system?

The reason is simple: when a dataset outgrows the storage capacity of a single physical machine, it becomes necessary to partition it and store the partitions on several separate machines.
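HDFS does exactly this: it splits each file into fixed-size blocks (128 MB by default in recent Hadoop versions) and spreads replicated copies of the blocks across DataNodes. A toy sketch in Python of the idea (illustrative only, not the HDFS API, and with a simplified round-robin placement where real HDFS placement is rack-aware):

```python
def split_into_blocks(data: bytes, block_size: int):
    """Partition a byte stream into fixed-size blocks, as HDFS does with files."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def assign_to_nodes(blocks, nodes, replication=3):
    """Place each block on `replication` distinct nodes (toy round-robin policy)."""
    placement = {}
    for b in range(len(blocks)):
        placement[b] = [nodes[(b + r) % len(nodes)]
                        for r in range(min(replication, len(nodes)))]
    return placement

data = b"x" * 1000
blocks = split_into_blocks(data, 256)
print([len(b) for b in blocks])  # [256, 256, 256, 232] - last block is partial
print(assign_to_nodes(blocks, ["dn1", "dn2", "dn3", "dn4"]))
```

Note that, as in real HDFS, the last block of a file is usually smaller than the configured block size, and replication is what lets the system survive the loss of any single node.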

3. Distributed file systems are more complex than traditional file systems

Because a distributed file system is built on top of a network, it inherits all the complexity of network programming, which makes it more complex than an ordinary file system.

4. The Hadoop file system

Many people equate HDFS with "the Hadoop file system." In fact, Hadoop provides a general file-system abstraction; HDFS is Hadoop's flagship file system, but Hadoop can also integrate other file systems besides HDFS.

Reference material: http://www.cnblogs.com/sharpxiajun/p/3151395.html

MapReduce is Hadoop's computing framework.
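The classic illustration of the MapReduce model is word count. A minimal pure-Python sketch of the map, shuffle, and reduce stages (conceptual only; a real job would be written against the Hadoop MapReduce API and run over HDFS blocks in parallel):

```python
from collections import defaultdict

def map_phase(line):
    # map: emit a (word, 1) pair for each word in an input line
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # shuffle: group all emitted values by key, as the framework
    # does between the map and reduce stages
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # reduce: sum the counts for one word
    return key, sum(values)

lines = ["big data big", "data hdfs"]
mapped = [pair for line in lines for pair in map_phase(line)]
result = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(result)  # {'big': 2, 'data': 2, 'hdfs': 1}
```

The power of the model is that map tasks run independently on different blocks of the input, so the same three-stage program scales from one machine to thousands without changing the user's code.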
