Extractive Distillation I


Similar Articles

Distillation of a Complex Mixture. Part I: High Pressure Distillation Column Analysis: Modeling and Simulation

In this analysis, based on the bubble point method, a physical model was established that clarifies the mass and heat interactions between the species present in the streams circulating in the column. In order to identify the externally controlled operating parameters, the number of degrees of freedom of the column was determined using the Gibbs phase rule. The mathematical model was converted to Fortran code...
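The degrees-of-freedom count mentioned above follows the Gibbs phase rule, F = C − P + 2, where C is the number of components and P the number of phases. A minimal sketch of that calculation (an illustration only, not the paper's Fortran model):

```python
def degrees_of_freedom(components: int, phases: int) -> int:
    """Gibbs phase rule: F = C - P + 2.

    Returns the number of intensive variables that can be set
    independently before the system state is fully determined.
    """
    return components - phases + 2

# Example: a binary vapor-liquid system (C = 2, P = 2)
print(degrees_of_freedom(2, 2))  # -> 2 (e.g. temperature and pressure)
```

For a full distillation column the accounting is larger (feed, stage, and product specifications all enter), but the same rule fixes how many operating parameters can be imposed externally.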


I-DIAG: From Community Discussion to Knowledge Distillation

I-DIAG is an attempt to understand how to take the collective discussions of a large group of people and distill the messages and documents into more succinct, durable knowledge. I-DIAG is a distributed environment that includes two separate applications, CyberForum and Consolidate. The goals of the project, the architecture of I-DIAG, and the two applications are described here.


Synthesizing Representative I/O Workloads Using Iterative Distillation

Storage systems designers are still searching for better methods of obtaining representative I/O workloads to drive studies of I/O systems. Traces of production workloads are very accurate, but inflexible and difficult to obtain. (Privacy and performance concerns discourage most system administrators from collecting such traces and making them available to the public.) The use of synthetic work...


Dropout distillation

Dropout is a popular stochastic regularization technique for deep neural networks that works by randomly dropping (i.e. zeroing) units from the network during training. This randomization process makes it possible to implicitly train an ensemble of exponentially many networks sharing the same parametrization, which should be averaged at test time to deliver the final prediction. A typical workaround for t...
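The dropout mechanism described in this abstract can be sketched in a few lines of NumPy. This is a generic "inverted dropout" illustration (the scaling convention used by most frameworks), not the distillation method the paper itself proposes:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x: np.ndarray, p: float, train: bool = True) -> np.ndarray:
    """Inverted dropout: during training, zero each unit with probability p
    and scale the survivors by 1/(1-p), so the expected activation matches
    the deterministic test-time forward pass."""
    if not train:
        return x
    mask = rng.random(x.shape) >= p  # keep a unit with probability 1 - p
    return x * mask / (1.0 - p)

x = np.ones(1000)
y = dropout(x, p=0.5)
print(y.mean())  # close to 1.0 in expectation, exact value varies per run
```

Averaging over the exponentially many random masks is what defines the implicit ensemble; the test-time pass (`train=False`) approximates that average with a single deterministic network.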


Policy Distillation

Policies for complex visual tasks have been successfully learned with deep reinforcement learning, using an approach called deep Q-networks (DQN), but relatively large (task-specific) networks and extensive training are needed to achieve good performance. In this work, we present a novel method called policy distillation that can be used to extract the policy of a reinforcement learning agent a...



Journal

Journal title: The Journal of the Society of Chemical Industry, Japan

Year: 1952

ISSN: 0023-2734, 2185-0860

DOI: 10.1246/nikkashi1898.55.492