Entropy 2016, Group A
It took me quite a lot of time to understand the questions, though. I am not posting the test here, as I am not sure whether I am permitted to do so; this post is for my personal use. Even though the original questions are written in Vietnamese, I will write my solutions in English, as I am not used to solving problems in Vietnamese.
PART I
a. Let $X$ be a discrete random variable with possible values $x_1, x_2, \dots, x_n$; we have $H(X) = -\sum_{i=1}^{n} p(x_i) \log_2 p(x_i)$. Remember that $0 \log_2 0 = 0$.
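A minimal Python sketch of the Shannon entropy of a discrete random variable (my own illustration, not part of the test), with the usual convention that zero-probability terms contribute nothing:

```python
import math

def entropy(probs):
    """Shannon entropy in bits; terms with p = 0 are skipped (0 * log 0 := 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 -- a fair coin carries exactly one bit
print(entropy([1.0, 0.0]))   # a deterministic outcome carries no information
```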
f. Venn diagram for the three sets $A$, $B$, and $C$ (image source: Science HQ).
 e. This question is a little tricky:
 . How are you going to express this solution in linear constraints?
 Similar to a.
We only have , so we cannot nerf ; hence no linear constraints. This question and question d could be clever traps in this type of question. Luckily, they are not traps in this test.
 Similar to c.
. A note here is that the constraint serves to legitimize .
 Well… no linear constraints?
e. Again, a tricky question:
 No feasible solutions: Constraint: .
Feasible solutions but no optimal solution: unbounded objective, e.g. Minimize: , Subject to: .
 Optimal solutions are feasible solutions: Minimize , Subject to: .
“chỉ có nghiệm tối ưu” (“only has optimal solutions”)? Or should it be “chỉ có một nghiệm tối ưu” (“only has one optimal solution”)? If it is the latter, then: Minimize , Subject to: .
If the constraints are , then yes, we have only one feasible solution. But what if we want exactly two feasible solutions, e.g. and ? We cannot have two equality constraints like . And if we involve inequalities like , then how many ‘s are there in the interval ? Infinitely many.
 Infinite feasible solutions: Constraint: .
 IMHO, the answer should be e.
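These cases can be checked numerically. Here is a sketch using `scipy.optimize.linprog` on toy one-variable LPs of my own choosing (not the ones from the test):

```python
from scipy.optimize import linprog

# (i) No feasible solution: minimize x subject to x <= -1
#     (linprog's default bounds already impose x >= 0, contradicting x <= -1).
res_infeasible = linprog(c=[1], A_ub=[[1]], b_ub=[-1])

# (ii) Feasible but no optimal solution: minimize -x subject only to x >= 0
#      (the objective is unbounded below as x grows).
res_unbounded = linprog(c=[-1])

# (iii) A unique optimal solution: minimize x subject to x >= 0, attained at x = 0.
res_unique = linprog(c=[1])

# scipy status codes: 0 = optimal, 2 = infeasible, 3 = unbounded.
print(res_infeasible.status, res_unbounded.status, res_unique.status)  # 2 3 0
```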
Personally, I tend to prefer logistic regression over decision trees. However, as Machine Learning involves diverse approaches and “Decision Tree” is a name that can refer to a wide range of approaches, the answer remains indefinite. Reference: CrossValidated.
To my knowledge, it is incorrect from a mathematical point of view, and in practice the phenomenon should be questioned further.
PART II
I cannot understand what is meant by “Hãy tính giá trị Entropy của tập huấn luyện trên theo phân lớp dương” (“Compute the entropy of the above training set with respect to the positive class”). What? If we only consider the positive cases, there is no need to calculate the provided formula, as the terms will all be zero! I will therefore ignore the part “theo phân lớp dương”.
 .
 .
. It is easy to see that many of the component entropies are zero, due to the fact that .
 .
 Information gain = Entropy before splitting – Entropy after splitting.
 .
 .
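The computation above follows this recipe. Here is a sketch in Python on a hypothetical toy table of my own (the actual training set from the test is not reproduced here); a perfect split yields an information gain of 1 bit:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy before splitting minus the weighted entropy after splitting on attr."""
    before = entropy(labels)
    n = len(labels)
    after = 0.0
    for v in set(r[attr] for r in rows):
        subset = [l for r, l in zip(rows, labels) if r[attr] == v]
        after += len(subset) / n * entropy(subset)
    return before - after

# Hypothetical toy data: the attribute separates the classes perfectly.
rows = [{"wind": "weak"}, {"wind": "weak"}, {"wind": "strong"}, {"wind": "strong"}]
labels = ["+", "+", "-", "-"]
print(information_gain(rows, labels, "wind"))  # 1.0
```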
I am not sure about the terms “Cost” and “Bias” mentioned in this question. However, I will assume that “Cost” refers to computational cost and that “Bias” refers to the distance between the final hypothesis and the target function. My answer: C, D, A, B.
 I do not know how to address this question.
 I am not sure if I understand the question correctly. However, if I do, then:
PART III
I cannot understand the role of in this question.
 Because is a covariance matrix, it is inherently symmetric. We have:
So, must be a unit eigenvector of for to reach a constrained extremum. It is easy to see that we need to choose the unit eigenvector with the largest eigenvalue to maximize .
 The same goes for . A note here is that as is symmetric, it is diagonalizable, hence it has linearly independent eigenvectors.
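A quick numerical check of this claim with NumPy, on synthetic data of my own: the variance of the data projected onto the top unit eigenvector of the (symmetric) covariance matrix equals the largest eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 2-D points with much more variance along the first axis.
X = rng.normal(size=(500, 2)) * np.array([3.0, 0.5])

S = np.cov(X, rowvar=False)            # covariance matrix, symmetric by construction
eigvals, eigvecs = np.linalg.eigh(S)   # eigh (symmetric case) sorts eigenvalues ascending
w = eigvecs[:, -1]                     # unit eigenvector with the largest eigenvalue

# Sample variance of the projections w^T x equals the top eigenvalue.
proj_var = np.var(X @ w, ddof=1)
print(np.isclose(proj_var, eigvals[-1]))  # True
```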
 Because is a covariance matrix, it is inherently symmetric. We have:
I cannot understand why the test asks that. However:

.

.
