Make sure you check the syllabus for the due date. Please use the notations adopted in class, even if the problem is stated in the book using a different notation.
UCI datasets: AGR, BAL, BAND, CAR, CMC, CRX, MONK, NUR, TIC, VOTE. (These are archives which I downloaded a while ago. For more details and more datasets visit http://archive.ics.uci.edu/ml/)
There are only two relevant files in each folder:
* .config : the number of datapoints, the number of discrete attributes, and the number of continuous (numeric) attributes. For the discrete attributes, the possible values are listed in order, one line per attribute. The next line in the config file gives the number of classes and their labels.
* .data : the datapoints, listed following the .config convention; the last column contains the class labels.
You should write a reader that, given the .config file, reads the data from the .data file.
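A minimal reader sketch (Python), assuming the .config file is whitespace-separated exactly as described above: the three counts on the first line, one line of possible values per discrete attribute, then a line with the number of classes followed by the labels. The function names and return structure are illustrative choices, not part of the assignment.

```python
def read_config(config_path):
    # Parse the .config file: counts, discrete-attribute values, class labels.
    with open(config_path) as f:
        lines = [line.split() for line in f if line.strip()]
    n_points, n_discrete, n_continuous = map(int, lines[0][:3])
    discrete_values = [lines[1 + i] for i in range(n_discrete)]
    class_line = lines[1 + n_discrete]
    return {
        "n_points": n_points,
        "n_discrete": n_discrete,
        "n_continuous": n_continuous,
        "discrete_values": discrete_values,
        "class_labels": class_line[1:],   # first token is the number of classes
    }

def read_data(data_path):
    # Read the datapoints; the last column holds the class label.
    X, y = [], []
    with open(data_path) as f:
        for line in f:
            fields = line.split()
            if fields:
                X.append(fields[:-1])
                y.append(fields[-1])
    return X, y
```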
A. Implement the AdaBoost algorithm with decision stumps as weak learners, as described in class. Run it on the UCI data and report the results. The datasets CRX and VOTE are required.
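A possible skeleton for part A (a sketch, not the required implementation): it assumes the attributes have already been encoded as numbers (e.g., discrete values mapped to integers or one-hot columns) and the labels are in {-1, +1}. The exhaustive threshold search below is the simplest correct stump learner; sorting each feature once makes it much faster.

```python
import numpy as np

def best_stump(X, y, w):
    # Pick the threshold stump minimizing the weighted error.
    n, d = X.shape
    best = None
    for j in range(d):
        for t in np.unique(X[:, j]):
            for sign in (+1, -1):
                pred = sign * np.where(X[:, j] <= t, 1.0, -1.0)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, t, sign)
    return best                             # (error, feature, threshold, sign)

def adaboost(X, y, T=100):
    n = X.shape[0]
    w = np.full(n, 1.0 / n)                 # uniform initial weights
    stumps, alphas = [], []
    for _ in range(T):
        err, j, t, sign = best_stump(X, y, w)
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] <= t, 1.0, -1.0)
        w *= np.exp(-alpha * y * pred)      # up-weight the mistakes
        w /= w.sum()
        stumps.append((j, t, sign))
        alphas.append(alpha)
    return stumps, alphas

def boosting_score(X, stumps, alphas):
    # F(x) = sum_t alpha_t * h_t(x); classify with sign(F(x)).
    F = np.zeros(X.shape[0])
    for (j, t, sign), alpha in zip(stumps, alphas):
        F += alpha * sign * np.where(X[:, j] <= t, 1.0, -1.0)
    return F
```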
B. Run the algorithm for each of the required datasets using c% of the datapoints chosen randomly for training, for several c values: 5, 10, 15, 20, 30, 50, 80. Test on a fixed fold (not used for training). For statistical significance, you can repeat the experiment with different randomly selected data or you can use cross-validation.
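One way to organize the part B experiments, reusing the adaboost and boosting_score helpers from the sketch above; the 20% test fold, the number of repeats, and the function name are arbitrary choices, not part of the assignment.

```python
import numpy as np

def training_size_experiment(X, y, T=100, c_values=(5, 10, 15, 20, 30, 50, 80),
                             test_fraction=0.2, n_repeats=5, seed=0):
    # Hold out a fixed test fold, then train on c% random subsets of the data.
    rng = np.random.default_rng(seed)
    perm = rng.permutation(X.shape[0])
    n_test = int(test_fraction * len(perm))
    test_idx, pool_idx = perm[:n_test], perm[n_test:]
    results = {}
    for c in c_values:
        accs = []
        for _ in range(n_repeats):          # repeat for statistical significance
            k = min(int(c / 100.0 * X.shape[0]), len(pool_idx))
            train_idx = rng.choice(pool_idx, size=k, replace=False)
            stumps, alphas = adaboost(X[train_idx], y[train_idx], T=T)
            pred = np.sign(boosting_score(X[test_idx], stumps, alphas))
            accs.append(np.mean(pred == y[test_idx]))
        results[c] = (np.mean(accs), np.std(accs))
    return results
```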
C. (extra credit) Run boosting on the other datasets. Some of them are multiclass, so you will need a multiclass boosting implementation. The easiest "multiclass" approach is to run binary boosting one-vs-the-others separately for each class (see the sketch below).
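A sketch of the one-vs-the-others reduction, reusing the binary adaboost and boosting_score helpers from part A; predicting the class whose model gives the largest score F(x) is one reasonable convention.

```python
import numpy as np

def one_vs_rest_train(X, y, classes, T=100):
    # One binary AdaBoost model per class: class k vs. all the others.
    return {k: adaboost(X, np.where(y == k, 1.0, -1.0), T=T) for k in classes}

def one_vs_rest_predict(X, models):
    # Predict the class whose binary model gives the largest boosting score F(x).
    classes = list(models.keys())
    scores = np.column_stack([boosting_score(X, *models[k]) for k in classes])
    return np.array(classes)[np.argmax(scores, axis=1)]
```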
Run your code from PB1 to perform active learning. Specifically:
- start with a training set of about 5% of the data (selected randomly)
- iterate M episodes: train AdaBoost for T rounds; from the datapoints not in the training set, select the 2% that are closest to the separation surface (boosting score F(x) closest to 0) and add these to the training set (with labels). Repeat until the size of the training set reaches 50% of the data.
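A possible shape for the active-learning loop, assuming binary labels in {-1, +1} and the adaboost/boosting_score helpers from PB1; selecting the unlabeled points with smallest |F(x)| implements the "closest to the separation surface" criterion.

```python
import numpy as np

def active_learning(X, y, T=100, init_frac=0.05, add_frac=0.02, stop_frac=0.50, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    labeled = set(rng.choice(n, size=int(init_frac * n), replace=False).tolist())
    history = []
    while len(labeled) < stop_frac * n:
        idx = np.array(sorted(labeled))
        stumps, alphas = adaboost(X[idx], y[idx], T=T)       # retrain on the current set
        pool = np.array([i for i in range(n) if i not in labeled])
        F = boosting_score(X[pool], stumps, alphas)
        k = max(1, int(add_frac * n))
        closest = pool[np.argsort(np.abs(F))[:k]]            # smallest |F(x)| = most uncertain
        labeled.update(closest.tolist())
        history.append((len(labeled), stumps, alphas))       # keep models to plot learning curves
    return history
```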
How does the performance improve as the training set grows? Compare the performance of the AdaBoost algorithm on a c% randomly selected training set with a c% actively built training set for several values of c: 5, 10, 15, 20, 30, 50. Perhaps you can obtain results like these.
Run boosting with ECOC functions on the 20Newsgroup dataset, with unigram features extracted by Cheng. Also, as an extra credit problem, you can try the Letter Recognition Data Set.
ECOC is a better multiclass approach than one-vs-the-rest. Each ECOC function partitions the multiclass dataset into two labels; boosting then runs as a binary problem. Having K ECOC functions means having K binary boosting models. At prediction time, each of the K models predicts 0/1, so the prediction is a "codeword" of length K (e.g., 11000110101...), from which the actual class has to be identified.
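An illustrative ECOC sketch: a random coding matrix (designed or exhaustive codes are equally valid), one binary AdaBoost model per bit, and nearest-codeword decoding by Hamming distance. It reuses the adaboost and boosting_score helpers from PB1.

```python
import numpy as np

def make_ecoc_codes(n_classes, K, seed=0):
    # One length-K binary codeword per class (random codes; other designs work too).
    return np.random.default_rng(seed).integers(0, 2, size=(n_classes, K))

def train_ecoc(X, y, classes, codes, T=100):
    # One binary model per ECOC bit: classes with bit 1 vs. classes with bit 0.
    class_index = {c: i for i, c in enumerate(classes)}
    bits_of_y = np.array([codes[class_index[c]] for c in y])      # (n, K)
    return [adaboost(X, np.where(bits_of_y[:, k] == 1, 1.0, -1.0), T=T)
            for k in range(codes.shape[1])]

def predict_ecoc(X, models, codes, classes):
    # Each model predicts one bit; decode by the nearest codeword in Hamming distance.
    bits = np.column_stack([(boosting_score(X, *m) > 0).astype(int) for m in models])
    hamming = (bits[:, None, :] != codes[None, :, :]).sum(axis=2)  # (n, n_classes)
    return np.array(classes)[np.argmin(hamming, axis=1)]
```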
Run gradient boosting with regression stumps/trees on the 20Newsgroup dataset.
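A minimal sketch of one common formulation (squared loss), where each regression stump fits the current residuals y - F(x); it assumes numeric features, real-valued or ±1 targets, and at least one feature with more than one distinct value.

```python
import numpy as np

def regression_stump(X, residuals):
    # Least-squares fit of a one-split regression stump to the residuals.
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or (~left).all():
                continue
            lv, rv = residuals[left].mean(), residuals[~left].mean()
            sse = ((residuals[left] - lv) ** 2).sum() + ((residuals[~left] - rv) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, j, t, lv, rv)
    return best[1:]                          # (feature, threshold, left value, right value)

def gradient_boosting(X, y, n_rounds=100, lr=0.1):
    # Squared-loss gradient boosting: the negative gradient is just the residual.
    F = np.full(X.shape[0], y.mean())
    stumps = []
    for _ in range(n_rounds):
        j, t, lv, rv = regression_stump(X, y - F)
        F += lr * np.where(X[:, j] <= t, lv, rv)
        stumps.append((j, t, lv, rv))
    return y.mean(), stumps

def gb_predict(X, base, stumps, lr=0.1):
    F = np.full(X.shape[0], base)
    for j, t, lv, rv in stumps:
        F += lr * np.where(X[:, j] <= t, lv, rv)
    return F
```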
What is the VC dimension of the following hypothesis classes:
a) unions of two rectangles
d) the multidimensional "sphere" given by f(x) = sign[(x-c)·(x-c) - b] in Euclidean space with m dimensions. Justify your answers!