Search results for: net learning

Number of results: 693,837

2002
W A van Leeuwen B Wemmenhove

A recurrent neural net is described that learns a set of patterns {ξ^µ} in the presence of noise. The learning rule is of a Hebbian type and, if noise were absent during the learning process, the resulting final values of the weights w_ij would correspond to the pseudo-inverse solution of the fixed point equation in question. For a non-vanishing noise parameter, an explicit expression for...
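
For context, the noise-free pseudo-inverse (projection) rule referred to here is conventionally written as below; the correlation matrix C and the paper's explicit expression for the noisy case are not part of this snippet.

```latex
% Standard pseudo-inverse (projection) weights for storing patterns
% \xi^\mu (\mu = 1,\dots,p) in a recurrent net of N units; each pattern is
% then a fixed point, \sum_j w_{ij} \xi_j^\mu = \xi_i^\mu.
w_{ij} = \frac{1}{N} \sum_{\mu,\nu=1}^{p} \xi_i^{\mu}\,\bigl(C^{-1}\bigr)_{\mu\nu}\,\xi_j^{\nu},
\qquad
C_{\mu\nu} = \frac{1}{N} \sum_{k=1}^{N} \xi_k^{\mu}\,\xi_k^{\nu}.
```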

1999
A. A Safavi M. R. Kharazmi J. A. Romagnoli

Technological advances in computer-controlled systems and other areas of the process industries make a continuous stream of data available for exploitation in an on-line environment. This paper presents a procedure for on-line learning with wave-nets using such a stream of data. Wave-nets are wavelet-based neural networks with localized and hierarchical...
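
As a rough illustration of the kind of model involved (not the authors' wave-net formulation, whose localized hierarchical structure is truncated above), a one-dimensional wavelet network with Mexican-hat units can be fitted on-line from a data stream roughly as follows; the unit count, learning rate, and target function are illustrative assumptions.

```python
import numpy as np

def mexican_hat(u):
    """Mexican-hat (Ricker) mother wavelet, a common choice for wavelet networks."""
    return (1.0 - u**2) * np.exp(-0.5 * u**2)

class TinyWaveNet:
    """Minimal 1-D wavelet network y = sum_i w_i * psi((x - b_i) / a_i),
    updated on-line by stochastic gradient descent on squared error.
    A generic wavelet-network sketch, not the paper's wave-net."""

    def __init__(self, n_units=8, x_range=(0.0, 1.0), lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = x_range
        self.b = rng.uniform(lo, hi, n_units)           # translations
        self.a = np.full(n_units, (hi - lo) / n_units)  # dilations
        self.w = np.zeros(n_units)                      # output weights
        self.lr = lr

    def predict(self, x):
        return float(self.w @ mexican_hat((x - self.b) / self.a))

    def update(self, x, y):
        """One on-line step: adjust the output weights toward the new sample (x, y)."""
        phi = mexican_hat((x - self.b) / self.a)
        err = y - self.w @ phi
        self.w += self.lr * err * phi
        return err

# Example: learn y = sin(2*pi*x) from a simulated data stream.
net = TinyWaveNet()
rng = np.random.default_rng(1)
for _ in range(5000):
    x = rng.uniform(0.0, 1.0)
    net.update(x, np.sin(2 * np.pi * x))
print(round(net.predict(0.25), 2))  # should land near sin(2*pi*0.25) = 1
```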

2017
Michael D. Godlevsky Sergey V. Orekhov Elena Orekhova

The theoretical basis of the search engine optimization (SEO) process, metrics of its efficiency, and an algorithm for carrying it out are proposed. The approach is based on the principles of situation control, machine learning, semantic net building, data mining, and service-oriented architecture as the IT solution. The main idea of the work is the use of situation control as a learning method for search ...

Journal: :Int. J. Web Eng. Technol. 2007
Dragan Gasevic Vladan Devedzic

The paper presents a Petri net infrastructure that should allow sharing Petri nets on the Semantic Web. Previous solutions only provide model interchange mechanisms between Petri net tools. The Petri net ontology is a central part of our solution. The ontology is closely related to the Petri Net Markup Language (PNML) – an ongoing Petri net community sharing effort. We developed the Petri net o...
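
For readers unfamiliar with the models being interchanged, a minimal place/transition net with the standard firing rule is sketched below; this is generic Petri net semantics, not the paper's ontology or the PNML schema.

```python
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    """Minimal place/transition net: a marking maps each place to a token
    count, and each transition lists its input and output places. Generic
    Petri net semantics only, not the paper's ontology or PNML."""
    marking: dict = field(default_factory=dict)       # place -> token count
    transitions: dict = field(default_factory=dict)   # name -> (inputs, outputs)

    def enabled(self, t):
        inputs, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, t):
        """Standard firing rule: consume one token per input place,
        produce one token per output place."""
        if not self.enabled(t):
            raise ValueError(f"transition {t} is not enabled")
        inputs, outputs = self.transitions[t]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Example: a producer/consumer handshake.
net = PetriNet(
    marking={"ready": 1, "buffer": 0},
    transitions={"produce": (["ready"], ["buffer"]), "consume": (["buffer"], ["ready"])},
)
net.fire("produce")
print(net.marking)  # {'ready': 0, 'buffer': 1}
```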

1994
Sreerupa Das Michael C. Mozer

Although recurrent neural nets have been moderately successful in learning to emulate finite-state machines (FSMs), the continuous internal state dynamics of a neural net are not well matched to the discrete behavior of an FSM. We describe an architecture, called DOLCE, that allows discrete states to evolve in a net as learning progresses. DOLCE consists of a standard recurrent neural net trained...

2007
Kazuo Kiguchi Hui He Kenbu Teramoto

Reinforcement learning is one of the most important learning methods for intelligent robots working in unknown or uncertain environments. Multi-dimensional fuzzy Q-learning, an extension of the Q-learning method, is proposed in this study. The proposed method is applied to an intelligent robot working in a dynamic environment. The rewards from the evaluation functions and the fuzzy Q-...
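
For context, the one-step Q-learning update that the fuzzy variant extends is the standard rule below; the multi-dimensional fuzzy extension itself is not specified in this snippet.

```latex
% One-step tabular Q-learning update (Watkins); the paper's
% multi-dimensional fuzzy extension of this rule is not shown here.
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \Bigl[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \Bigr]
```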

2005
Goran Šimić Dragan Gašević Vladan Devedžić

This chapter emphasizes the integration of Semantic Web technologies into intelligent learning systems by proposing an architecture for an intelligent learning management system (ILMS) named Multitutor. The system is a Web-based environment for developing e-learning courses and for their use by students. Multitutor is designed as a Web-classroom client-server system, ontol...

1993
Sreerupa Das Michael C. Mozer

Although recurrent neural nets have been moderately successful in learning to emulate finite-state machines (FSMs), the continuous internal state dynamics of a neural net are not well matched to the discrete behavior of an FSM. We describe an architecture, called DOLCE, that allows discrete states to evolve in a net as learning progresses. DOLCE consists of a standard recurrent neural net train...
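
As a rough sketch of the general idea of letting a recurrent net's hidden state become discrete as learning progresses (not the authors' DOLCE architecture, whose details are truncated above), the hidden activations can be pulled toward a small set of candidate values with a temperature that is annealed over training.

```python
import numpy as np

def soft_quantize(h, centers, temperature):
    """Pull each hidden unit toward a set of candidate discrete values.
    High temperature: nearly continuous state; low temperature: nearly
    discrete state. A generic annealing sketch, not DOLCE itself."""
    d = -((h[:, None] - centers[None, :]) ** 2) / temperature
    p = np.exp(d - d.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return p @ centers  # soft assignment collapses to the nearest center as T -> 0

def rnn_step(h, x, W_h, W_x, centers, temperature):
    """One step of a vanilla RNN whose hidden state is softly quantized."""
    h_new = np.tanh(W_h @ h + W_x @ x)
    return soft_quantize(h_new, centers, temperature)

# Example: 4 hidden units, states annealed toward {-1, +1}.
rng = np.random.default_rng(0)
W_h, W_x = rng.normal(size=(4, 4)) * 0.5, rng.normal(size=(4, 2)) * 0.5
centers = np.array([-1.0, 1.0])
h = np.zeros(4)
for temperature in np.linspace(1.0, 0.01, 50):  # anneal over "training"
    h = rnn_step(h, rng.normal(size=2), W_h, W_x, centers, temperature)
print(np.round(h, 2))  # entries end up near -1 or +1
```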

2016
Uri Heinemann Roi Livni Elad Eban Gal Elidan Amir Globerson

Neural networks have recently re-emerged as a powerful hypothesis class, yielding impressive classification accuracy in multiple domains. However, their training is a non-convex optimization problem which poses theoretical and practical challenges. Here we address this difficulty by turning to “improper” learning of neural nets. In other words, we learn a classifier that is not a neural net but...
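
As one generic way to make "improper" learning concrete (the snippet does not say which surrogate hypothesis class the authors actually use), a convex fit over fixed random ReLU features produces a classifier that is not itself a trained neural net.

```python
import numpy as np

# Improper learning illustration: instead of fitting a neural net directly
# (non-convex), fit a linear model over fixed random ReLU features (convex).
# A generic random-features construction, not the paper's algorithm.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = np.sign(X[:, 0] * X[:, 1])             # a target no linear model can fit

W = rng.normal(size=(2, 200))              # frozen random first layer
Phi = np.maximum(X @ W, 0.0)               # ReLU random features

# Ridge-regularized least squares on the features: a convex problem.
lam = 1e-2
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(200), Phi.T @ y)
acc = np.mean(np.sign(Phi @ w) == y)
print(f"training accuracy of the improper (random-feature) learner: {acc:.2f}")
```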

Journal: :CoRR 2017
Ke Li Jitendra Malik

Learning to Optimize (Li & Malik, 2016) is a recently proposed framework for learning optimization algorithms using reinforcement learning. In this paper, we explore learning an optimization algorithm for training shallow neural nets. Such high-dimensional stochastic optimization problems present interesting challenges for existing reinforcement learning algorithms. We develop an extension that...
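
Schematically, and only schematically since the snippet does not describe the authors' policy parameterization or training procedure, a learned optimizer replaces a hand-designed update rule with a policy that maps gradient information to parameter updates; the stand-in policy below is fixed by hand purely for illustration.

```python
import numpy as np

def learned_update(policy_params, grad, state):
    """Stand-in for a learned update rule: a 'policy' maps the current
    gradient plus a running statistic to a parameter step. Here the policy
    is just a fixed momentum-style blend; in the Learning to Optimize
    framework such a rule would itself be trained with reinforcement
    learning rather than chosen by hand."""
    step_size, decay = policy_params
    state = decay * state + grad          # running gradient statistic
    return -step_size * state, state      # proposed update, carried-over state

# Inner loop: apply the (hypothetically learned) rule to a tiny least-squares model.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 3)), rng.normal(size=100)
loss = lambda w: float(np.mean((X @ w - y) ** 2))
w, state, policy_params = rng.normal(size=3), np.zeros(3), (0.05, 0.9)
start = loss(w)
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
    step, state = learned_update(policy_params, grad, state)
    w += step
print(round(start, 2), "->", round(loss(w), 2))  # loss falls toward the least-squares optimum
```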

[Chart: number of search results per year]