Search results for: hdfs
Number of results: 571
We harden the Hadoop Distributed File System (HDFS) against fail-silent (non fail-stop) behaviors that result from memory corruption and software bugs using a new approach: selective and lightweight versioning (SLEEVE). With this approach, actions performed by important subsystems of HDFS (e.g., namespace management) are checked by a second implementation of the subsystem that uses lightweight,...
Konstantin V. Shvachko is a principal software engineer at Yahoo!, where he develops HDFS. He specializes in efficient data structures and algorithms for large-scale distributed storage systems. He discovered a new type of balanced tree, the S-tree, for optimal indexing of unstructured data, and he was a primary developer of an S-tree-based Linux file system, treeFS, a prototype of reiserFS. Kons...
In recent years, data and internet usage have grown rapidly, giving rise to big data. Many software frameworks address these problems by increasing the performance of distributed systems and providing ample storage. One of the most beneficial systems to utilize is Hadoop, which clusters machines and distributes work between them. Hadoop consists of two major components: the Hadoop Distributed File ...
In this study, we compared the gene expression profiles of non-syndromic hyperplastic dental follicle (HDF) fibroblasts and normal dental follicle (NDF) fibroblasts using cDNA microarrays, quantitative PCR, and immunohistochemical staining. Microarray analysis showed that several collagen genes were upregulated in the HDFs, including collagen types I, IV, VIII, and XI and TIMP-1, -3, and -4 (...
Storage systems are essential building blocks for cloud computing infrastructures. Although high-performance storage servers are the ultimate solution for cloud storage, the implementation of an inexpensive storage system remains an open issue. To address this problem, an efficient cloud storage system is implemented with inexpensive commodity computer nodes that are organized into PC cluster...
This paper presents a duplication-less storage system over engineering-oriented cloud computing platforms. Our deduplication storage system, which manages data and duplication over the cloud system, consists of two major components: a front-end deduplication application and a mass storage system as the backend. The Hadoop Distributed File System (HDFS) is a common distributed file system on the cl...
Developing a specialized Hadoop for cloud computing presents significant difficulties, and to tackle this problem, a secure computing system has been built. It was utilized in this field to develop and improve the security of handling and gathering data from users. Apache Big Data technologies include Hadoop, which uses the Map-Reduce architecture to prepare enormous amounts of data and is one of the most widely used tools for dealing with ...
Software companies develop projects in various domains but rarely archive the programs for future use. In this approach, the method signatures are stored in the OWL and the source code components are stored in HDFS. The OWL reduces the software development cost considerably. The design phase generates many artifacts; one such artifact is the UML class diagram for the project, which consists of classes, methods,...