This plugin uses RapidMiner as the machine learning engine and acts as a bridge between Icy and RapidMiner. Image data along the Z, T and C axes are treated as different features, and labels are marked using the "Mask Editor". Both supervised and unsupervised learning are supported.
RapidMiner (formerly YALE) is a powerful and intuitive data mining tool implemented in Java and available under the GPL (GNU General Public License). It can be used for machine learning, data mining, text mining, predictive analytics, and business analytics.
- Powerful: uses the open-source RapidMiner engine to do the real work
- Flexible: user-defined training and prediction models, specified by process files generated with the RapidMiner GUI
- The Z, T and C axes are treated as features
- Uses the Mask Editor to define labelled and unlabelled data for training; prediction results are shown as masks
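To make the feature layout concrete, here is a small sketch (not the plugin's actual API) of how a single pixel becomes one training example: the intensities along the Z, T and C axes at position (x, y) are concatenated into one feature vector, and the name of the mask covering that pixel supplies the class label.

```java
import java.util.Arrays;

public class PixelFeatures {
    // image[z][t][c][y][x] -> feature vector of length sizeZ*sizeT*sizeC
    // for the pixel at (x, y); the iteration order (Z, then T, then C)
    // is an illustrative assumption, not the plugin's documented layout.
    static double[] featuresAt(double[][][][][] image, int x, int y) {
        int sizeZ = image.length, sizeT = image[0].length, sizeC = image[0][0].length;
        double[] features = new double[sizeZ * sizeT * sizeC];
        int i = 0;
        for (int z = 0; z < sizeZ; z++)
            for (int t = 0; t < sizeT; t++)
                for (int c = 0; c < sizeC; c++)
                    features[i++] = image[z][t][c][y][x];
        return features;
    }

    public static void main(String[] args) {
        // Toy stack: 2 slices, 1 time point, 2 channels, 1x1 pixels.
        double[][][][][] img = new double[2][1][2][1][1];
        img[0][0][0][0][0] = 10; img[0][0][1][0][0] = 20;
        img[1][0][0][0][0] = 30; img[1][0][1][0][0] = 40;
        System.out.println(Arrays.toString(featuresAt(img, 0, 0)));
        // -> [10.0, 20.0, 30.0, 40.0]
    }
}
```

This is why sequences used for prediction must share the same Z, T and C dimensions as the training sequence: the feature vectors must have the same length and ordering.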
1. Download RapidMiner from http://sourceforge.net/projects/rapidminer/, unzip the file, and copy its "lib" directory into the "plugins" directory of Icy.
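The copy step can be sketched as follows. All paths here are placeholders (the snippet simulates the unpacked RapidMiner directory and an Icy install under /tmp/demo); substitute your actual download location and Icy directory.

```shell
# Simulate an unpacked RapidMiner download and an Icy installation
# (placeholder paths; in practice these come from the unzipped archive).
mkdir -p /tmp/demo/rapidminer/lib /tmp/demo/icy/plugins
touch /tmp/demo/rapidminer/lib/rapidminer.jar

# The actual install step: copy RapidMiner's "lib" directory into Icy's "plugins".
cp -r /tmp/demo/rapidminer/lib /tmp/demo/icy/plugins/

ls /tmp/demo/icy/plugins/lib   # the RapidMiner jars should now be visible to Icy
```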
2. Learning models are defined as process files, i.e. XML files generated by RapidMiner.
For testing purposes, some example XML files are included in the jar file of this plugin; since a jar is an ordinary ZIP archive, you can extract them with any unzip tool. The file "svm_train.xml" defines an SVM classifier; for supervised learning, you should provide some labelled images with the Mask Editor to train the classifier. "cluster_train.xml" defines a simple k-means clusterer; you can define training data using the Mask Editor, and all masks will be taken into account. "predict.xml" defines a model applier, which works for both clusterers and classifiers in the prediction stage.
If you want to define your own learning model, the following steps demonstrate how the process files are generated.
First, start RapidMiner, draw a training process like the one in the following picture, and export the process as an XML file. Note that you must use the first input node as the training-data input and connect the model to the first output node.
Second, export a model-apply process like the one in the following picture. Also note that the first input node will receive the model, the second the unlabelled data, and the labelled-data output should be connected to the first output node.
You can use any operators in your process; for example, you can add operators that save your model in the training process and load it in the prediction process. Keep in mind, however, that you must follow the convention above for the input and output nodes of the root process, in both order and data type.
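As an illustration of the convention above, a training process file exported from RapidMiner has roughly the following shape. This sketch is illustrative only: generate the real file from the RapidMiner GUI, since operator class names and port names vary between RapidMiner versions.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch, not a working process file: export the real
     one from the RapidMiner GUI. -->
<process version="5.0">
  <operator activated="true" class="process" name="Root">
    <process expanded="true">
      <operator activated="true" class="k_means" name="Clustering"/>
      <!-- Convention: training data arrives on the FIRST input port
           of the root process... -->
      <connect from_port="input 1" to_op="Clustering" to_port="example set"/>
      <!-- ...and the trained model must be wired to the FIRST output port. -->
      <connect from_op="Clustering" from_port="cluster model" to_port="result 1"/>
    </process>
  </operator>
</process>
```

The model-apply process follows the same pattern, except that the root process has two inputs (model first, unlabelled data second) and its first output carries the labelled data.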
3. Run "Rapid Learning"; the Mask Editor will start automatically. By default, two masks named "positive" and "negative" are added, but you can add more masks with any names. The name of a mask is used as the class name in supervised learning; for unsupervised learning, all masks are treated the same. Check "Enable drawing" and select the mask you want to draw on the sequence.
Note: currently there are some errors and warnings in the console output; they can be ignored.
4. When mask editing is done, select the training process file and start the training process. Note that if your training set is large, especially when the Z, T and C dimensions are long, training may take some time.
5. After training, the trained model is kept in memory, and you can apply it to other sequences with the same Z, T and C dimensions. First select the prediction process file, then click "Predict" to start prediction.
The following videos and pictures demonstrate an application to an image stack.
- Supervised learning
- Unsupervised learning
- Fix the errors and warnings shown when starting the plugin
- Provide some predefined process files for quick use.
- Write a plugin to support learning the content of a single image.