pyBrain trainUntilConvergence... help me understand this function please


I have several newbie questions about trainUntilConvergence in PyBrain.

trainUntilConvergence divides the data set into training and validation sets (by default, 25% is used for validation). Is that correct?

Is the error reported after each epoch (when verbose=True) the error on the validation set, or the error against the training set?

Is the network considered converged (thus stopping execution) when the validation set's error is no longer decreasing, or when the error on the training set is no longer decreasing? (I assume it's the former, otherwise why set aside a portion for validation?)

Is the section of data chosen for validation contiguous (e.g. the last x% of the data set), or is x% of the rows chosen at random?

Thanks!

According to the documentation, trainUntilConvergence takes several parameters; its signature is shown below. Yes, by default 25% of the data is used for the validation set.

trainUntilConvergence(dataset=None, maxEpochs=None, verbose=None, continueEpochs=10, validationProportion=0.25)

You can change the validationProportion parameter to other values as you see fit. The right proportion for the validation set is debatable and there is no one-value-fits-all answer; you need to try different values and see what fits your case.
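For example, a call with a smaller validation split might look like the sketch below. It assumes a toy SupervisedDataSet, a network from buildNetwork and a BackpropTrainer, which are standard PyBrain pieces; if I remember correctly trainUntilConvergence returns two lists of per-epoch errors (training and validation), but check your version.

from pybrain.datasets import SupervisedDataSet
from pybrain.tools.shortcuts import buildNetwork
from pybrain.supervised.trainers import BackpropTrainer

# Toy XOR-style data, repeated so there is enough to split off a validation set
ds = SupervisedDataSet(2, 1)
for inp, out in [((0, 0), (0,)), ((0, 1), (1,)), ((1, 0), (1,)), ((1, 1), (0,))] * 10:
    ds.addSample(inp, out)

net = buildNetwork(2, 3, 1)
trainer = BackpropTrainer(net, dataset=ds)

# Hold out 10% for validation instead of the default 25%
trainErrors, valErrors = trainer.trainUntilConvergence(
    verbose=True, validationProportion=0.1, maxEpochs=100, continueEpochs=10)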

The trainUntilConvergence method trains on the data set until the error on the validation set stops decreasing for a number of epochs. You can vary the number of epochs the trainer considers before stopping by changing the continueEpochs parameter, which defaults to 10. In other words, if the error on the validation set does not improve for 10 consecutive epochs, training is terminated. This is known as the early stopping method and is commonly used when training neural nets.
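The underlying logic is roughly the following. This is only a minimal sketch of the early-stopping idea in plain Python, with hypothetical train_one_epoch and validation_error callables, not PyBrain's actual source:

def early_stopping(train_one_epoch, validation_error, continue_epochs=10, max_epochs=1000):
    """Stop when the validation error has not improved for `continue_epochs` epochs."""
    best_val = float('inf')
    epochs_since_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch()                 # one pass over the training set
        val_err = validation_error()      # evaluate on the held-out validation set
        if val_err < best_val:
            best_val = val_err
            epochs_since_improvement = 0  # improvement: reset the patience counter
        else:
            epochs_since_improvement += 1
            if epochs_since_improvement >= continue_epochs:
                break                     # no improvement for `continue_epochs` epochs: stop
    return best_val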

Regarding whether the validation set is contiguous, I'm not sure about it. Logically it should be selected as random picks; you can check for yourself with the sketch below.
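One way to check on your own data is to split it yourself with the dataset's splitWithProportion method (which, as far as I can tell, is what trainUntilConvergence uses internally) and see which rows land in each part. A rough sketch:

from pybrain.datasets import SupervisedDataSet

ds = SupervisedDataSet(1, 1)
for i in range(20):
    ds.addSample((i,), (i,))   # inputs 0..19, so each row is easy to identify

# 75% / 25% split, i.e. the same proportions trainUntilConvergence uses by default
trainPart, valPart = ds.splitWithProportion(0.75)
print(sorted(int(inp[0]) for inp, _ in trainPart))  # a contiguous run 0..14 would mean an ordered split
print(sorted(int(inp[0]) for inp, _ in valPart))    # scattered indices would mean a random split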

