Developer information
hbase-bin | Command-line interface to HBase, a database for large data
Command-line interface to HBase.

HBase is the Hadoop database. It hosts very large tables (think petabytes) -- billions of rows X millions of columns -- atop clusters of commodity hardware. It's modeled after Google's Bigtable.

* Convenient base classes for backing Hadoop MapReduce jobs with HBase tables
* Query predicate push-down via server-side scan and get filters
* Optimizations for real-time queries
* A high-performance Thrift gateway
* A RESTful Web service gateway that supports XML, Protobuf, and binary data encoding options
* Cascading source and sink modules
* Extensible JRuby-based (JIRB) shell
* Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia, or via JMX
* No HBase single point of failure
* Rolling restart for configuration changes and minor upgrades
* Random-access performance on par with open-source relational databases such as MySQL

The HBase shell has a run-time dependency on jruby. jruby is in non-free; see Debian Bug #551618. When this bug is resolved, jruby can be moved to free and becomes a dependency.

For your first steps with HBase you may install the package hbase-masterd in addition to this package.
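A typical first session with the JRuby-based shell looks like the transcript below. The table name `test` and column family `cf` are arbitrary examples; the session requires a running HBase cluster reachable from this machine.

```
$ hbase shell
hbase> create 'test', 'cf'            # create a table with one column family
hbase> put 'test', 'row1', 'cf:a', 'value1'   # store a cell
hbase> get 'test', 'row1'             # read a single row
hbase> scan 'test'                    # iterate over all rows
hbase> disable 'test'                 # a table must be disabled before dropping
hbase> drop 'test'
```

Note that `drop` refuses to run on an enabled table, hence the `disable` step first.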
hbase-daemons-common | Creates the user and directories for HBase daemons
Prepares some common things for all HBase daemon packages:
* creates data and log directories owned by the hadoop user
* manages the update-alternatives mechanism for hadoop configuration
* brings in the common dependencies of the daemons
hbase-masterd | The HBase master coordinates the regionservers
HBase is the Hadoop database. It hosts very large tables (think petabytes) -- billions of rows X millions of columns -- atop clusters of commodity hardware. It's modeled after Google's Bigtable.

The HBase master must be installed once per HBase cluster.

The default configuration starts the HBase master daemon in pseudo-distributed mode. In this mode a regionserver is started inside the master and you don't need to install the separate hbase-regionserverd package.

For HBase to work it needs to reach a ZooKeeper node; see the zookeeperd package. The default configuration expects ZooKeeper to run on localhost.
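To point the master at a ZooKeeper ensemble other than localhost, the standard property is `hbase.zookeeper.quorum` in `hbase-site.xml`. The hostnames below are placeholders for illustration only:

```xml
<configuration>
  <!-- Comma-separated list of ZooKeeper hosts; defaults to localhost -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
</configuration>
```

With the default (localhost) value, the configuration matches the pseudo-distributed setup described above.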
hbase-regionserverd | HBase regionserver to be installed on each node
HBase is the Hadoop database. It hosts very large tables (think petabytes) -- billions of rows X millions of columns -- atop clusters of commodity hardware. It's modeled after Google's Bigtable.

Each node of an HBase cluster runs a regionserver.
libhbase-java | HBase (Hadoop database) Java library
HBase is the Hadoop database. It hosts very large tables (think petabytes) -- billions of rows X millions of columns -- atop clusters of commodity hardware. It's modeled after Google's Bigtable.

* Convenient base classes for backing Hadoop MapReduce jobs with HBase tables
* Query predicate push-down via server-side scan and get filters
* Optimizations for real-time queries
* A high-performance Thrift gateway
* A RESTful Web service gateway that supports XML, Protobuf, and binary data encoding options
* Cascading source and sink modules
* Extensible JRuby-based (JIRB) shell
* Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia, or via JMX
* No HBase single point of failure
* Rolling restart for configuration changes and minor upgrades
* Random-access performance on par with open-source relational databases such as MySQL
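A minimal sketch of using the client classes from this library is shown below. The table name `test`, column family `cf`, and cell contents are illustrative; the sketch assumes the HBase client jars on the classpath and a reachable cluster, so it is not standalone-runnable.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseClientSketch {
    public static void main(String[] args) throws Exception {
        // Reads hbase-site.xml from the classpath (ZooKeeper quorum, etc.)
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("test"))) {
            // Write one cell: row "row1", column family "cf", qualifier "a"
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("a"), Bytes.toBytes("value1"));
            table.put(put);

            // Read the cell back
            Result result = table.get(new Get(Bytes.toBytes("row1")));
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("a"));
            System.out.println(Bytes.toString(value));
        }
    }
}
```

`Connection` instances are heavyweight and thread-safe, so applications typically create one and share it, while `Table` instances are lightweight and created per use.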
libhbase-java-doc | Javadoc for the HBase (Hadoop database) Java library
HBase is the Hadoop database. It hosts very large tables (think petabytes) -- billions of rows X millions of columns -- atop clusters of commodity hardware. It's modeled after Google's Bigtable.

* Convenient base classes for backing Hadoop MapReduce jobs with HBase tables
* Query predicate push-down via server-side scan and get filters
* Optimizations for real-time queries
* A high-performance Thrift gateway
* A RESTful Web service gateway that supports XML, Protobuf, and binary data encoding options
* Cascading source and sink modules
* Extensible JRuby-based (JIRB) shell
* Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia, or via JMX
* No HBase single point of failure
* Rolling restart for configuration changes and minor upgrades
* Random-access performance on par with open-source relational databases such as MySQL