

Data Mining: Practical Machine Learning Tools and Techniques
Slides for Chapter 4 of Data Mining by I. H. Witten and E. Frank

Algorithms: The basic methods
♦ Inferring rudimentary rules
♦ Statistical modeling
♦ Constructing decision trees
♦ Constructing rules
♦ Linear models
♦ Instance-based learning
♦ Clustering

Simplicity first

Simple algorithms often work very well!
There are many kinds of simple structure, e.g.:
♦ One attribute does all the work
♦ All attributes contribute equally and independently
♦ A weighted linear combination might do
♦ Instance-based: use a few prototypes
♦ Use simple logical rules
Success of the method depends on the domain

Inferring rudimentary rules

1R: learns a 1-level decision tree
♦ I.e., rules that all test one particular attribute
Basic version
♦ One branch for each value
♦ Each branch assigns the most frequent class
♦ Error rate: proportion of instances that don't belong to the majority class of their corresponding branch
♦ Choose the attribute with the lowest error rate (assumes nominal attributes)

Pseudo-code for 1R

For each attribute,
    For each value of the attribute, make a rule as follows:
        count how often each class appears
        find the most frequent class
        make the rule assign that class to this attribute-value
    Calculate the error rate of the rules
Choose the rules with the smallest error rate

Note: “missing” is treated as a separate attribute value
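The pseudo-code translates almost directly into a few lines of Python. The sketch below is an illustrative reimplementation (not part of the original slides); it assumes instances are tuples of nominal values and attributes are addressed by index.

    from collections import Counter, defaultdict

    def one_r(instances, attribute_indices, class_index):
        # 1R: for each attribute, build one rule per value (value -> majority class)
        # and keep the attribute whose rules make the fewest errors on the training data.
        best = None
        for a in attribute_indices:
            counts = defaultdict(Counter)      # attribute value -> class frequencies
            for inst in instances:
                counts[inst[a]][inst[class_index]] += 1
            rules = {v: c.most_common(1)[0][0] for v, c in counts.items()}
            errors = sum(sum(c.values()) - c.most_common(1)[0][1] for c in counts.values())
            if best is None or errors < best[0]:
                best = (errors, a, rules)
        errors, attribute, rules = best
        return attribute, rules, errors

On the 14 weather instances below this reproduces the error counts in the table: Outlook and Humidity both make 4/14 errors, Temperature and Windy 5/14.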

Evaluating the weather attributes

The weather data (Outlook, Temp, Humidity, Windy → Play):

    Sunny     Hot   High    False  No
    Sunny     Hot   High    True   No
    Overcast  Hot   High    False  Yes
    Rainy     Mild  High    False  Yes
    Rainy     Cool  Normal  False  Yes
    Rainy     Cool  Normal  True   No
    Overcast  Cool  Normal  True   Yes
    Sunny     Mild  High    False  No
    Sunny     Cool  Normal  False  Yes
    Rainy     Mild  Normal  False  Yes
    Sunny     Mild  Normal  True   Yes
    Overcast  Mild  High    True   Yes
    Overcast  Hot   Normal  False  Yes
    Rainy     Mild  High    True   No

1R rules and their errors:

    Attribute    Rules               Errors   Total errors
    Outlook      Sunny → No          2/5      4/14
                 Overcast → Yes      0/4
                 Rainy → Yes         2/5
    Temp         Hot → No*           2/4      5/14
                 Mild → Yes          2/6
                 Cool → Yes          1/4
    Humidity     High → No           3/7      4/14
                 Normal → Yes        1/7
    Windy        False → Yes         2/8      5/14
                 True → No*          3/6

    * indicates a tie


Discussion of 1R

1R was described in a paper by Holte (1993): "Very Simple Classification Rules Perform Well on Most Commonly Used Datasets" (Robert C. Holte, Computer Science Department, University of Ottawa)
♦ Contains an experimental evaluation on 16 datasets (using cross-validation so that results were representative of performance on future data)
♦ Minimum number of instances was set to 6 after some experimentation
♦ 1R's simple rules performed not much worse than much more complex decision trees
Simplicity first pays off!

Discussion of 1R: Hyperpipes

Another simple technique: build one rule for each class
♦ Each rule is a conjunction of tests, one for each attribute
♦ For numeric attributes: the test checks whether the instance's value is inside an interval
  ♦ Interval given by the minimum and maximum observed in the training data
♦ For nominal attributes: the test checks whether the value is one of a subset of attribute values
  ♦ Subset given by all possible values observed in the training data
♦ Class with most matching tests is predicted

Statistical modeling

"Opposite" of 1R: use all the attributes
Two assumptions: attributes are
♦ equally important
♦ statistically independent (given the class value)
  I.e., knowing the value of one attribute says nothing about the value of another (if the class is known)
Independence assumption is never correct!
But … this scheme works well in practice

Dealing with numeric attributes

Discretize numeric attributes:
♦ Divide each attribute's range into intervals
♦ Sort instances according to the attribute's values
♦ Place breakpoints where the class changes (majority class)
♦ This minimizes the total error

Example: temperature from the weather data

    Outlook    Temperature  Humidity  Windy  Play
    Sunny      85           85        False  No
    Sunny      80           90        True   No
    Overcast   83           86        False  Yes
    Rainy      75           80        False  Yes
    …          …            …         …      …

    64   65   68   69   70   71   72   72   75   75   80   81   83   85
    Yes | No | Yes  Yes  Yes | No   No   Yes | Yes  Yes | No | Yes  Yes | No

The problem of overfitting

This procedure is very sensitive to noise
♦ One instance with an incorrect class label will probably produce a separate interval
Also: a time stamp attribute will have zero errors
Simple solution: enforce a minimum number of instances in the majority class per interval
Example (with min = 3):

    64   65   68   69   70   71   72   72   75   75   80   81   83   85
    Yes  No   Yes  Yes  Yes | No   No   Yes  Yes  Yes | No   Yes  Yes  No

With overfitting avoidance

Resulting rule set:

    Attribute    Rules                     Errors   Total errors
    Outlook      Sunny → No                2/5      4/14
                 Overcast → Yes            0/4
                 Rainy → Yes               2/5
    Temperature  ≤ 77.5 → Yes              3/10     5/14
                 > 77.5 → No*              2/4
    Humidity     ≤ 82.5 → Yes              1/7      3/14
                 > 82.5 and ≤ 95.5 → No    2/6
                 > 95.5 → Yes              0/1
    Windy        False → Yes               2/8      5/14
                 True → No*                3/6
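A minimal sketch of 1R's discretization step, assuming the values are already sorted together with their class labels; the minimum count applies to the majority class of the current interval, as described above. Function and variable names are illustrative only, not taken from the book.

    from collections import Counter

    def one_r_breakpoints(sorted_values, sorted_classes, min_majority=3):
        # Sweep left to right; close the current interval once its majority class
        # has at least min_majority members, the next instance has a different class,
        # and we are not splitting between two equal values.
        breaks, start = [], 0
        for i in range(1, len(sorted_values)):
            majority, count = Counter(sorted_classes[start:i]).most_common(1)[0]
            if (count >= min_majority and sorted_classes[i] != majority
                    and sorted_values[i] != sorted_values[i - 1]):
                breaks.append((sorted_values[i - 1] + sorted_values[i]) / 2)
                start = i
        return breaks

For the temperatures above this yields breakpoints at 70.5 and 77.5; merging adjacent intervals whose majority class is the same (and breaking the final tie as marked in the table) leaves the single breakpoint 77.5 used in the rule set.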

Probabilities for weather data

Counts and relative frequencies derived from the 14 weather instances above, for each attribute value and class:

    Outlook      Yes    No         Temperature  Yes    No
      Sunny      2 2/9  3 3/5        Hot        2 2/9  2 2/5
      Overcast   4 4/9  0 0/5        Mild       4 4/9  2 2/5
      Rainy      3 3/9  2 2/5        Cool       3 3/9  1 1/5

    Humidity     Yes    No         Windy        Yes    No
      High       3 3/9  4 4/5        False      6 6/9  2 2/5
      Normal     6 6/9  1 1/5        True       3 3/9  3 3/5

    Play         Yes 9 9/14         No 5 5/14

A new day:

    Outlook  Temp.  Humidity  Windy  Play
    Sunny    Cool   High      True   ?

Likelihood of the two classes:

    For "yes" = 2/9 × 3/9 × 3/9 × 3/9 × 9/14 = 0.0053
    For "no"  = 3/5 × 1/5 × 4/5 × 3/5 × 5/14 = 0.0206

Conversion into a probability by normalization:

    P("yes") = 0.0053 / (0.0053 + 0.0206) = 0.205
    P("no")  = 0.0206 / (0.0053 + 0.0206) = 0.795

Bayes's rule

Probability of event H given evidence E:

    Pr[H | E] = Pr[E | H] × Pr[H] / Pr[E]

A priori probability of H, Pr[H]:
♦ Probability of the event before evidence is seen
A posteriori probability of H, Pr[H | E]:
♦ Probability of the event after evidence is seen
(Thomas Bayes: born 1702 in London, England; died 1761 in Tunbridge Wells, Kent, England)

Naïve Bayes for classification

Classification learning: what's the probability of the class given an instance?
♦ Evidence E = instance
♦ Event H = class value for instance
Naïve assumption: evidence splits into parts (i.e. attributes) that are independent:

    Pr[H | E] = Pr[E1 | H] × Pr[E2 | H] × … × Pr[En | H] × Pr[H] / Pr[E]

Weather data example

    Outlook  Temp.  Humidity  Windy  Play
    Sunny    Cool   High      True   ?

Probability of class "yes" for this evidence E:

    Pr[yes | E] = Pr[Outlook = Sunny | yes] × Pr[Temperature = Cool | yes]
                  × Pr[Humidity = High | yes] × Pr[Windy = True | yes] × Pr[yes] / Pr[E]
                = (2/9 × 3/9 × 3/9 × 3/9 × 9/14) / Pr[E]

The "zero-frequency problem"

What if an attribute value doesn't occur with every class value? (e.g. "Outlook = Overcast" for class "no")
♦ The corresponding probability will be zero!
♦ The a posteriori probability will also be zero! (No matter how likely the other values are!)
Remedy: add 1 to the count for every attribute value–class combination (Laplace estimator)
Result: probabilities will never be zero! (also: stabilizes probability estimates)
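For concreteness, here is a small illustrative Python sketch of the counting scheme just described, including the Laplace correction. It is not from the slides, the class and method names are ours, and instances are assumed to be tuples of nominal values.

    from collections import Counter, defaultdict

    class NaiveBayes:
        def __init__(self, laplace=1.0):
            self.laplace = laplace

        def fit(self, instances, classes):
            self.class_counts = Counter(classes)
            self.value_counts = defaultdict(Counter)   # (attribute index, class) -> value counts
            self.attribute_values = defaultdict(set)   # attribute index -> distinct values seen
            for inst, cls in zip(instances, classes):
                for i, value in enumerate(inst):
                    self.value_counts[(i, cls)][value] += 1
                    self.attribute_values[i].add(value)
            return self

        def predict(self, inst):
            n = sum(self.class_counts.values())
            scores = {}
            for cls, class_count in self.class_counts.items():
                score = class_count / n                          # prior Pr[class]
                for i, value in enumerate(inst):
                    count = self.value_counts[(i, cls)][value]
                    k = len(self.attribute_values[i])            # number of values of attribute i
                    score *= (count + self.laplace) / (class_count + self.laplace * k)
                scores[cls] = score
            total = sum(scores.values())                         # normalize as on the slide
            return {cls: s / total for cls, s in scores.items()}

With laplace=0 this reproduces the hand calculation above for the new day (0.205 / 0.795); with laplace=1 the zero count for Outlook = Overcast in class "no" becomes (0 + 1) / (5 + 3) instead of 0.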

Modified probability estimates

In some cases adding a constant different from 1 might be more appropriate
Example: attribute Outlook for class "yes", spreading a total weight μ over the three values with p1 + p2 + p3 = 1:

    Sunny:    (2 + μ·p1) / (9 + μ)
    Overcast: (4 + μ·p2) / (9 + μ)
    Rainy:    (3 + μ·p3) / (9 + μ)

Weights don't need to be equal (but they must sum to 1)

Missing values

Training: the instance is not included in the frequency count for the attribute value–class combination
Classification: the attribute will be omitted from the calculation
Example:

    Outlook  Temp.  Humidity  Windy  Play
    ?        Cool   High      True   ?

    Likelihood of "yes" = 3/9 × 3/9 × 3/9 × 9/14 = 0.0238
    Likelihood of "no"  = 1/5 × 4/5 × 3/5 × 5/14 = 0.0343
    P("yes") = 0.0238 / (0.0238 + 0.0343) = 41%
    P("no")  = 0.0343 / (0.0238 + 0.0343) = 59%

Numeric attributes

Usual assumption: attributes have a normal or Gaussian probability distribution (given the class)
The probability density function for the normal distribution is defined by two parameters:
♦ Sample mean:

    μ = (x1 + x2 + … + xn) / n

♦ Standard deviation:

    σ = sqrt( ((x1 − μ)² + (x2 − μ)² + … + (xn − μ)²) / (n − 1) )

Then the density function f(x) is

    f(x) = 1 / (sqrt(2π)·σ) × exp( −(x − μ)² / (2σ²) )

Statistics for weather data

    Temperature | yes: 83, 70, 68, 64, 69, 75, 75, 72, 81     μ = 73,  σ = 6.2
    Temperature | no:  85, 80, 65, 72, 71                     μ = 75,  σ = 7.9
    Humidity    | yes: 86, 96, 80, 65, 70, 80, 70, 90, 75     μ = 79,  σ = 10.2
    Humidity    | no:  85, 90, 70, 95, 91                     μ = 86,  σ = 9.7

(The nominal attributes Outlook and Windy keep the counts from the table above; Play is 9/14 yes, 5/14 no.)

Example density value:

    f(temperature = 66 | yes) = 1 / (sqrt(2π) × 6.2) × exp( −(66 − 73)² / (2 × 6.2²) ) = 0.0340

Classifying a new day

A new day:

    Outlook  Temp.  Humidity  Windy  Play
    Sunny    66     90        true   ?

    Likelihood of "yes" = 2/9 × 0.0340 × 0.0221 × 3/9 × 9/14 = 0.000036
    Likelihood of "no"  = 3/5 × 0.0221 × 0.0381 × 3/5 × 5/14 = 0.000108
    P("yes") = 0.000036 / (0.000036 + 0.000108) = 25%
    P("no")  = 0.000108 / (0.000036 + 0.000108) = 75%

Missing values during training are not included in the calculation of mean and standard deviation

Probability densities

Relationship between probability and density:

    Pr[ c − ε/2 < x ≤ c + ε/2 ] ≈ ε × f(c)

But: this doesn't change the calculation of a posteriori probabilities because ε cancels out
Exact relationship:

    Pr[ a ≤ x ≤ b ] = ∫ f(t) dt  over the interval from a to b
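A one-line illustration of the density calculation above (plain Python, nothing assumed beyond the standard math module):

    import math

    def gaussian_density(x, mean, std):
        # f(x) = 1 / (sqrt(2*pi)*sigma) * exp(-(x - mean)^2 / (2*sigma^2))
        return math.exp(-(x - mean) ** 2 / (2 * std ** 2)) / (math.sqrt(2 * math.pi) * std)

    # Example from the slides: temperature 66 for class "yes" (mean 73, std 6.2)
    print(round(gaussian_density(66, 73, 6.2), 4))   # ≈ 0.034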

Multinomial naïve Bayes I

Version of naïve Bayes used for document classification using the bag of words model
n1, n2, … , nk: number of times word i occurs in the document
P1, P2, … , Pk: probability of obtaining word i when sampling from documents in class H
Probability of observing document E given class H (based on the multinomial distribution):

    Pr[E | H] ≈ N! × Π (Pi^ni / ni!)   for i = 1 … k, where N = n1 + n2 + … + nk

Ignores the probability of generating a document of the right length (probability assumed constant for each class)

Multinomial naïve Bayes II

Suppose the dictionary has two words, yellow and blue
Suppose Pr[yellow | H] = 75% and Pr[blue | H] = 25%
Suppose E is the document "blue yellow blue"
Probability of observing the document:

    Pr[E | H] = 3! × (0.75^1 / 1!) × (0.25^2 / 2!) = 9/64 ≈ 0.14

Suppose there is another class H' that has Pr[yellow | H'] = 25% and Pr[blue | H'] = 75%:

    Pr[E | H'] = 3! × (0.25^1 / 1!) × (0.75^2 / 2!) = 27/64 ≈ 0.42

Need to take the prior probability of the class into account to make the final classification
Factorials don't actually need to be computed
Underflows can be prevented by using logarithms

Naïve Bayes: discussion

Naïve Bayes works surprisingly well (even if the independence assumption is clearly violated)
Why? Because classification doesn't require accurate probability estimates as long as maximum probability is assigned to the correct class
However: adding too many redundant attributes will cause problems (e.g. identical attributes)
Note also: many numeric attributes are not normally distributed (→ kernel density estimators)

Constructing decision trees

Strategy: top down, in recursive divide-and-conquer fashion
♦ First: select an attribute for the root node; create a branch for each possible attribute value
♦ Then: split the instances into subsets, one for each branch extending from the node
♦ Finally: repeat recursively for each branch, using only instances that reach the branch
Stop if all instances have the same class

Which attribute to select?

(Figures: the four possible tree stumps for the weather data, splitting on Outlook, Temperature, Humidity and Windy.)

Criterion for attribute selection

Which is the best attribute?
♦ Want to get the smallest tree
♦ Heuristic: choose the attribute that produces the "purest" nodes
Popular impurity criterion: information gain
♦ Information gain increases with the average purity of the subsets
Strategy: choose the attribute that gives the greatest information gain

Computing information

Measure information in bits
♦ Given a probability distribution, the info required to predict an event is the distribution's entropy
♦ Entropy gives the information required in bits (can involve fractions of bits!)
Formula for computing the entropy:

    entropy(p1, p2, … , pn) = −p1·log p1 − p2·log p2 … − pn·log pn

Example: attribute Outlook

Outlook = Sunny:
    info([2,3]) = entropy(2/5, 3/5) = −2/5·log(2/5) − 3/5·log(3/5) = 0.971 bits
Outlook = Overcast:
    info([4,0]) = entropy(1, 0) = −1·log(1) − 0·log(0) = 0 bits
    (Note: log(0) is normally undefined, but 0·log(0) is taken to be 0.)
Outlook = Rainy:
    info([3,2]) = entropy(3/5, 2/5) = −3/5·log(3/5) − 2/5·log(2/5) = 0.971 bits
Expected information for the attribute:
    info([2,3],[4,0],[3,2]) = (5/14)×0.971 + (4/14)×0 + (5/14)×0.971 = 0.693 bits

Computing information gain

Information gain: information before splitting − information after splitting

    gain(Outlook) = info([9,5]) − info([2,3],[4,0],[3,2]) = 0.940 − 0.693 = 0.247 bits

Information gain for the attributes from the weather data:

    gain(Outlook)     = 0.247 bits
    gain(Temperature) = 0.029 bits
    gain(Humidity)    = 0.152 bits
    gain(Windy)       = 0.048 bits

Continuing to split (within the Outlook = Sunny branch):

    gain(Temperature) = 0.571 bits
    gain(Humidity)    = 0.971 bits
    gain(Windy)       = 0.020 bits

Final decision tree

(Outlook at the root; the Sunny branch splits on Humidity, the Rainy branch on Windy, and Overcast is a pure "yes" leaf.)
Note: not all leaves need to be pure; sometimes identical instances have different classes
⇒ Splitting stops when the data can't be split any further
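The information-gain computation above is easy to reproduce; the following sketch (illustrative, not from the slides) works on plain lists of class labels:

    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(labels, subsets):
        # gain = info(before splitting) - weighted info(after splitting)
        n = len(labels)
        return entropy(labels) - sum(len(s) / n * entropy(s) for s in subsets)

    play = ['yes'] * 9 + ['no'] * 5
    sunny, overcast, rainy = ['yes']*2 + ['no']*3, ['yes']*4, ['yes']*3 + ['no']*2
    print(round(information_gain(play, [sunny, overcast, rainy]), 3))   # 0.247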

Highly-branching attributes

Problematic: attributes with a large number of values (extreme case: ID code)
Subsets are more likely to be pure if there is a large number of values
⇒ Information gain is biased towards choosing attributes with a large number of values
⇒ This may result in overfitting (selection of an attribute that is non-optimal for prediction)
Another problem: fragmentation

Weather data with ID code

Same 14 instances as before, plus an ID code attribute taking the values A, B, C, …, N (one per instance).

Tree stump for ID code attribute

Splitting on ID code produces one pure single-instance leaf per value.
Entropy of split: info([0,1]) + info([0,1]) + … + info([0,1]) = 0 bits
⇒ Information gain is maximal for ID code (namely 0.940 bits)

Gain ratio

Gain ratio: a modification of the information gain that reduces its bias
Gain ratio takes the number and size of branches into account when choosing an attribute
♦ It corrects the information gain by taking the intrinsic information of a split into account
Intrinsic information: entropy of the distribution of instances into branches (i.e. how much info do we need to tell which branch an instance belongs to)

Wishlist for a purity measure

Properties we require from a purity measure:
♦ When a node is pure, the measure should be zero
♦ When impurity is maximal (i.e. all classes equally likely), the measure should be maximal
♦ The measure should obey the multistage property (i.e. decisions can be made in several stages):

    measure([2,3,4]) = measure([2,7]) + (7/9) × measure([3,4])

Entropy is the only function that satisfies all three properties!

Properties of the entropy

The multistage property:

    entropy(p, q, r) = entropy(p, q + r) + (q + r) × entropy(q/(q + r), r/(q + r))

Simplification of computation:

    info([2,3,4]) = −2/9·log(2/9) − 3/9·log(3/9) − 4/9·log(4/9)
                  = [ −2·log 2 − 3·log 3 − 4·log 4 + 9·log 9 ] / 9

Note: instead of maximizing the info gain we could just minimize the information

Computing the gain ratio

Example: intrinsic information for ID code:

    info([1,1,…,1]) = 14 × ( −1/14 × log(1/14) ) = 3.807 bits

Value of an attribute decreases as its intrinsic information gets larger
Definition of gain ratio:

    gain_ratio(attribute) = gain(attribute) / intrinsic_info(attribute)

Example:

    gain_ratio(ID code) = 0.940 bits / 3.807 bits = 0.246

Gain ratios for weather data

    Outlook:      info = 0.693   gain = 0.940 − 0.693 = 0.247   split info = info([5,4,5]) = 1.577   gain ratio = 0.247 / 1.577 = 0.157
    Temperature:  info = 0.911   gain = 0.940 − 0.911 = 0.029   split info = info([4,6,4]) = 1.557   gain ratio = 0.029 / 1.557 = 0.019
    Humidity:     info = 0.788   gain = 0.940 − 0.788 = 0.152   split info = info([7,7])   = 1.000   gain ratio = 0.152 / 1.000 = 0.152
    Windy:        info = 0.892   gain = 0.940 − 0.892 = 0.048   split info = info([8,6])   = 0.985   gain ratio = 0.048 / 0.985 = 0.049

More on the gain ratio

"Outlook" still comes out top
However: "ID code" has a greater gain ratio
♦ Standard fix: ad hoc test to prevent splitting on that type of attribute
Problem with gain ratio: it may overcompensate
♦ May choose an attribute just because its intrinsic information is very low
♦ Standard fix: only consider attributes with greater than average information gain

Discussion

Top-down induction of decision trees: ID3, algorithm developed by Ross Quinlan
♦ Gain ratio is just one modification of this basic algorithm
♦ ⇒ C4.5: deals with numeric attributes, missing values, noisy data
Similar approach: CART
There are many other attribute selection criteria! (But little difference in accuracy of the result)

Covering algorithms

Convert decision tree into a rule set
♦ Straightforward, but rule set overly complex
♦ More effective conversions are not trivial
Instead, can generate a rule set directly
♦ for each class in turn, find a rule set that covers all instances in it (excluding instances not in the class)
Called a covering approach:
♦ at each stage a rule is identified that "covers" some of the instances

Example: generating a rule

(Figure: instances of classes a and b in the x–y plane, with the space covered by each successively refined rule.)

    If true then class = a
    If x > 1.2 then class = a
    If x > 1.2 and y > 2.6 then class = a

Possible rule set for class "b":

    If x ≤ 1.2 then class = b
    If x > 1.2 and y ≤ 2.6 then class = b

Could add more rules, get "perfect" rule set

Rules vs. trees

The corresponding decision tree produces exactly the same predictions as the rule set above
But: rule sets can be more perspicuous when decision trees suffer from replicated subtrees
Also: in multiclass situations, a covering algorithm concentrates on one class at a time whereas a decision tree learner takes all classes into account

Simple covering algorithm

Generates a rule by adding tests that maximize the rule's accuracy
Similar to the situation in decision trees: the problem of selecting an attribute to split on
♦ But: a decision tree inducer maximizes overall purity
Each new test reduces the rule's coverage

Selecting a test

Goal: maximize accuracy
♦ t: total number of instances covered by the rule
♦ p: positive examples of the class covered by the rule
♦ t − p: number of errors made by the rule
⇒ Select the test that maximizes the ratio p/t
We are finished when p/t = 1 or the set of instances can't be split any further

Example: contact lens data

Rule we seek:

    If ? then recommendation = hard

Possible tests:

    Age = Young                            2/8
    Age = Pre-presbyopic                   1/8
    Age = Presbyopic                       1/8
    Spectacle prescription = Myope         3/12
    Spectacle prescription = Hypermetrope  1/12
    Astigmatism = no                       0/12
    Astigmatism = yes                      4/12
    Tear production rate = Reduced         0/12
    Tear production rate = Normal          4/12

Modified rule and resulting data

Rule with best test added:

    If astigmatism = yes then recommendation = hard

Instances covered by modified rule:

    Age             Spectacle prescription  Astigmatism  Tear production rate  Recommended lenses
    Young           Myope                   Yes          Reduced               None
    Young           Myope                   Yes          Normal                Hard
    Young           Hypermetrope            Yes          Reduced               None
    Young           Hypermetrope            Yes          Normal                hard
    Pre-presbyopic  Myope                   Yes          Reduced               None
    Pre-presbyopic  Myope                   Yes          Normal                Hard
    Pre-presbyopic  Hypermetrope            Yes          Reduced               None
    Pre-presbyopic  Hypermetrope            Yes          Normal                None
    Presbyopic      Myope                   Yes          Reduced               None
    Presbyopic      Myope                   Yes          Normal                Hard
    Presbyopic      Hypermetrope            Yes          Reduced               None
    Presbyopic      Hypermetrope            Yes          Normal                None

Further refinement

Current state:

    If astigmatism = yes and ? then recommendation = hard

Possible tests:

    Age = Young                            2/4
    Age = Pre-presbyopic                   1/4
    Age = Presbyopic                       1/4
    Spectacle prescription = Myope         3/6
    Spectacle prescription = Hypermetrope  1/6
    Tear production rate = Reduced         0/6
    Tear production rate = Normal          4/6

Modified rule and resulting data

Rule with best test added:

    If astigmatism = yes and tear production rate = normal then recommendation = hard

Instances covered by modified rule:

    Age             Spectacle prescription  Astigmatism  Tear production rate  Recommended lenses
    Young           Myope                   Yes          Normal                Hard
    Young           Hypermetrope            Yes          Normal                hard
    Pre-presbyopic  Myope                   Yes          Normal                Hard
    Pre-presbyopic  Hypermetrope            Yes          Normal                None
    Presbyopic      Myope                   Yes          Normal                Hard
    Presbyopic      Hypermetrope            Yes          Normal                None

Further refinement

Current state:

    If astigmatism = yes and tear production rate = normal and ? then recommendation = hard

Possible tests:

    Age = Young                            2/2
    Age = Pre-presbyopic                   1/2
    Age = Presbyopic                       1/2
    Spectacle prescription = Myope         3/3
    Spectacle prescription = Hypermetrope  1/3

Tie between the first and the fourth test
♦ We choose the one with greater coverage

The result

Final rule:

    If astigmatism = yes and tear production rate = normal and spectacle prescription = myope then recommendation = hard

Second rule for recommending "hard lenses" (built from the instances not covered by the first rule):

    If age = young and astigmatism = yes and tear production rate = normal then recommendation = hard

These two rules cover all "hard lenses":
♦ The process is repeated with the other two classes

Pseudo-code for PRISM

For each class C
    Initialize E to the instance set
    While E contains instances in class C
        Create a rule R with an empty left-hand side that predicts class C
        Until R is perfect (or there are no more attributes to use) do
            For each attribute A not mentioned in R, and each value v,
                Consider adding the condition A = v to the left-hand side of R
            Select A and v to maximize the accuracy p/t
                (break ties by choosing the condition with the largest p)
            Add A = v to R
        Remove the instances covered by R from E

Rules vs. decision lists

PRISM with the outer loop removed generates a decision list for one class
♦ Subsequent rules are designed for instances that are not covered by previous rules
♦ But: order doesn't matter because all rules predict the same class
The outer loop considers all classes separately
♦ No order dependence implied
Problems: overlapping rules, a default rule is required

Separate and conquer

Methods like PRISM (for dealing with one class) are separate-and-conquer algorithms:
♦ First, identify a useful rule
♦ Then, separate out all the instances it covers
♦ Finally, "conquer" the remaining instances
Difference to divide-and-conquer methods:
♦ The subset covered by a rule doesn't need to be explored any further
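The PRISM pseudo-code above maps onto a compact Python sketch. This is an illustration only (not the book's code); it assumes instances are dictionaries of nominal attribute values plus a 'class' entry, and ties that remain after comparing accuracy p/t and coverage p are broken by attribute order.

    def learn_prism_rules(instances, target_class, attributes):
        rules, remaining = [], list(instances)
        while any(inst['class'] == target_class for inst in remaining):
            rule, covered = {}, list(remaining)
            # Grow the rule until it is perfect or no attributes are left.
            while any(inst['class'] != target_class for inst in covered) and len(rule) < len(attributes):
                best = None
                for attr in attributes:
                    if attr in rule:
                        continue
                    for value in {inst[attr] for inst in covered}:
                        subset = [inst for inst in covered if inst[attr] == value]
                        p = sum(inst['class'] == target_class for inst in subset)
                        key = (p / len(subset), p)   # accuracy first, coverage breaks ties
                        if best is None or key > best[0]:
                            best = (key, attr, value)
                _, attr, value = best
                rule[attr] = value
                covered = [inst for inst in covered if inst[attr] == value]
            rules.append(rule)
            remaining = [inst for inst in remaining
                         if not all(inst[a] == v for a, v in rule.items())]
        return rules

Run with target_class = 'hard' on the contact lens data, it should reproduce the two rules listed above.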

Mining association rules

(The examples below again use the 14-instance weather data.)
Naïve method for finding association rules:
♦ Use the separate-and-conquer method
♦ Treat every possible combination of attribute values as a separate class
Two problems:
♦ Computational complexity
♦ Resulting number of rules (which would have to be pruned on the basis of support and confidence)
But: we can look for high-support rules directly!

Item sets

Support: number of instances correctly covered by an association rule
♦ The same as the number of instances covered by all tests in the rule (LHS and RHS!)
Item: one test/attribute-value pair
Item set: all items occurring in a rule
Goal: only rules that exceed a pre-defined support
⇒ Do it by finding all item sets with the given minimum support and generating rules from them!

Item sets for weather data

    One-item sets:    Outlook = Sunny (5)
                      Temperature = Cool (4)
                      …
    Two-item sets:    Outlook = Sunny, Temperature = Hot (2)
                      Outlook = Sunny, Humidity = High (3)
                      …
    Three-item sets:  Outlook = Sunny, Temperature = Hot, Humidity = High (2)
                      Outlook = Sunny, Humidity = High, Windy = False (2)
                      …
    Four-item sets:   Outlook = Sunny, Temperature = Hot, Humidity = High, Play = No (2)
                      Outlook = Rainy, Temperature = Mild, Windy = False, Play = Yes (2)
                      …

In total: 12 one-item sets, 47 two-item sets, 39 three-item sets, 6 four-item sets and 0 five-item sets (with minimum support of two)

Generating rules from an item set

Once all item sets with minimum support have been generated, we can turn them into rules
Example item set with support 4:

    Humidity = Normal, Windy = False, Play = Yes (4)

Seven (2^N − 1) potential rules:

    If Humidity = Normal and Windy = False then Play = Yes                 4/4
    If Humidity = Normal and Play = Yes then Windy = False                 4/6
    If Windy = False and Play = Yes then Humidity = Normal                 4/6
    If Humidity = Normal then Windy = False and Play = Yes                 4/7
    If Windy = False then Humidity = Normal and Play = Yes                 4/8
    If Play = Yes then Humidity = Normal and Windy = False                 4/9
    If True then Humidity = Normal and Windy = False and Play = Yes        4/12

Rules for weather data

Rules with support > 1 and confidence = 100%:

          Association rule                                           Sup.  Conf.
    1     Humidity = Normal, Windy = False   ⇒  Play = Yes           4     100%
    2     Temperature = Cool                 ⇒  Humidity = Normal    4     100%
    3     Outlook = Overcast                 ⇒  Play = Yes           4     100%
    4     Temperature = Cool, Play = Yes     ⇒  Humidity = Normal    3     100%
    …     …                                                          …     …
    58    Outlook = Sunny, Temperature = Hot ⇒  Humidity = High      2     100%

In total: 3 rules with support four, 5 with support three, and 50 with support two

Example rules from the same set

Item set:

    Temperature = Cool, Humidity = Normal, Windy = False, Play = Yes (2)

Resulting rules (all with 100% confidence):

    Temperature = Cool, Windy = False                     ⇒  Humidity = Normal, Play = Yes
    Temperature = Cool, Humidity = Normal, Windy = False  ⇒  Play = Yes
    Temperature = Cool, Windy = False, Play = Yes         ⇒  Humidity = Normal

due to the following "frequent" item sets:

    Temperature = Cool, Windy = False (2)
    Temperature = Cool, Humidity = Normal, Windy = False (2)
    Temperature = Cool, Windy = False, Play = Yes (2)

Generating item sets efficiently

How can we efficiently find all frequent item sets?
Finding one-item sets is easy
Idea: use one-item sets to generate two-item sets, two-item sets to generate three-item sets, …
♦ If (A B) is a frequent item set, then (A) and (B) have to be frequent item sets as well!
♦ In general: if X is a frequent k-item set, then all (k−1)-item subsets of X are also frequent
⇒ Compute k-item sets by merging (k−1)-item sets

Example

Given: five three-item sets (A B C), (A B D), (A C D), (A C E), (B C D)
Lexicographically ordered!
Candidate four-item sets:

    (A B C D)  OK because of (B C D)
    (A C D E)  Not OK because of (C D E)

Final check by counting instances in the dataset!
The (k−1)-item sets are stored in a hash table

Generating rules efficiently

We are looking for all high-confidence rules
♦ Support of the antecedent obtained from the hash table
♦ But: the brute-force method is (2^N − 1)
Better way: building (c + 1)-consequent rules from c-consequent ones
♦ Observation: a (c + 1)-consequent rule can only hold if all corresponding c-consequent rules also hold
Resulting algorithm similar to the procedure for large item sets

Example

1-consequent rules:

    If Outlook = Sunny and Windy = False and Play = No then Humidity = High (2/2)
    If Humidity = High and Windy = False and Play = No then Outlook = Sunny (2/2)

Corresponding 2-consequent rule:

    If Windy = False and Play = No then Outlook = Sunny and Humidity = High (2/2)

Final check of the antecedent against the hash table!

Association rules: discussion

Above method makes one pass through the data for each different size of item set
♦ Other possibility: generate (k+2)-item sets just after (k+1)-item sets have been generated
♦ Result: more (k+2)-item sets than necessary will be considered but fewer passes through the data
♦ Makes sense if the data is too large for main memory
Practical issue: generating a certain number of rules (e.g. by incrementally reducing min. support)
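The merging trick for generating candidate item sets can be written directly from the description above. This is an illustrative sketch (not the book's code), with item sets represented as sorted tuples of items:

    from itertools import combinations

    def candidate_item_sets(frequent_k_minus_1):
        # Merge (k-1)-item sets that share their first k-2 items, then prune any
        # candidate that has an infrequent (k-1)-subset.
        frequent = set(frequent_k_minus_1)
        candidates = set()
        for a, b in combinations(sorted(frequent), 2):
            if a[:-1] == b[:-1]:                      # lexicographic merge
                candidate = a + (b[-1],)
                if all(subset in frequent
                       for subset in combinations(candidate, len(candidate) - 1)):
                    candidates.add(candidate)
        return candidates

    three_sets = [('A','B','C'), ('A','B','D'), ('A','C','D'), ('A','C','E'), ('B','C','D')]
    print(candidate_item_sets(three_sets))            # {('A', 'B', 'C', 'D')}

The final support check against the dataset, as noted above, still has to be done separately.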

Other issues

Standard ARFF format is very inefficient for typical market basket data
♦ Attributes represent items in a basket and most items are usually missing
♦ Data should be represented in sparse format
Instances are also called transactions
Confidence is not necessarily the best measure
♦ Example: milk occurs in almost every supermarket transaction
Other measures have been devised (e.g. lift)

Linear models: linear regression

Work most naturally with numeric attributes
Standard technique for numeric prediction
♦ Outcome is a linear combination of attributes:

    x = w0 + w1·a1 + w2·a2 + … + wk·ak

Weights are calculated from the training data
Predicted value for the first training instance a(1) (assuming each instance is extended with a constant attribute a0 with value 1):

    w0·a0(1) + w1·a1(1) + w2·a2(1) + … + wk·ak(1)

Minimizing the squared error

Choose k + 1 coefficients to minimize the squared error on the training data
Squared error:

    Σ over instances i of ( x(i) − Σ over attributes j of wj·aj(i) )²

Derive the coefficients using standard matrix operations
Can be done if there are more instances than attributes (roughly speaking)
Minimizing the absolute error is more difficult

Classification

Any regression technique can be used for classification
♦ Training: perform a regression for each class, setting the output to 1 for training instances that belong to the class, and 0 for those that don't
♦ Prediction: predict the class corresponding to the model with the largest output value (membership value)
For linear regression this is known as multi-response linear regression
Problem: membership values are not in the [0,1] range, so they aren't proper probability estimates

Linear models: logistic regression

Builds a linear model for a transformed target variable
Assume we have two classes
Logistic regression replaces the target

    P = Pr[1 | a1, a2, … , ak]

by this target:

    log( P / (1 − P) )

The logit transformation maps [0,1] to (−∞, +∞)

Logit transformation

Resulting model:

    Pr[1 | a1, a2, … , ak] = 1 / (1 + exp(−w0 − w1·a1 − … − wk·ak))
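As a concrete illustration of deriving the linear-regression coefficients "using standard matrix operations", here is a minimal NumPy sketch (our example, not the book's); it assumes the instances form a numeric matrix and prepends the constant bias attribute:

    import numpy as np

    def linear_regression(instances, targets):
        # Add the constant attribute a0 = 1 and solve the least-squares problem
        # min_w || X w - x ||^2 (lstsq is numerically safer than the raw normal equations).
        X = np.column_stack([np.ones(len(instances)), np.asarray(instances, dtype=float)])
        w, *_ = np.linalg.lstsq(X, np.asarray(targets, dtype=float), rcond=None)
        return w                      # w[0] is the intercept, w[1:] the attribute weights

    def predict(w, instance):
        return w[0] + np.dot(w[1:], instance)

For multi-response classification as described above, one such regression is fitted per class against 0/1 targets and the class whose model gives the largest output wins.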

Example logistic regression model

Model with w0 = 0.5 and w1 = 1 (figure: the resulting logistic curve for a single attribute)
Parameters are found from the training data using maximum likelihood

Maximum likelihood

Aim: maximize the probability of the training data with respect to the parameters
Can use logarithms of probabilities and maximize the log-likelihood of the model:

    Σ over instances i of [ (1 − x(i)) × log(1 − Pr[1 | a1(i), … , ak(i)]) + x(i) × log(Pr[1 | a1(i), … , ak(i)]) ]

where the x(i) are either 0 or 1
Weights wi need to be chosen to maximize the log-likelihood (relatively simple method: iteratively re-weighted least squares)

Multiple classes

Can perform logistic regression independently for each class (like multi-response linear regression)
Problem: probability estimates for the different classes won't sum to one
Better: train coupled models by maximizing the likelihood over all classes
Alternative that often works well in practice: pairwise classification

Pairwise classification

Idea: build a model for each pair of classes, using only training data from those classes
Problem? Have to solve k(k−1)/2 classification problems for a k-class problem
Turns out not to be a problem in many cases because the training sets become small:
♦ Assume the data is evenly distributed, i.e. 2n/k per learning problem for n instances in total
♦ Suppose the learning algorithm is linear in n
♦ Then the runtime of pairwise classification is proportional to (k(k−1)/2) × (2n/k) = (k−1)n

Linear models are hyperplanes

Decision boundary for two-class logistic regression is where the probability equals 0.5:

    Pr[1 | a1, a2, … , ak] = 1 / (1 + exp(−w0 − w1·a1 − … − wk·ak)) = 0.5

which occurs when

    −w0 − w1·a1 − … − wk·ak = 0

Thus logistic regression can only separate data that can be separated by a hyperplane
Multi-response linear regression has the same problem. Class 1 is assigned if:

    w0(1) + w1(1)·a1 + … + wk(1)·ak > w0(2) + w1(2)·a1 + … + wk(2)·ak

Linear models: the perceptron

Don't actually need probability estimates if all we want to do is classification
Different approach: learn a separating hyperplane
Assumption: the data is linearly separable
Algorithm for learning a separating hyperplane: the perceptron learning rule
Hyperplane:

    w0·a0 + w1·a1 + w2·a2 + … + wk·ak = 0

where we again assume that there is a constant attribute a0 with value 1 (bias)
If the sum is greater than zero we predict the first class, otherwise the second class

The algorithm

Set all weights to zero
Until all instances in the training data are classified correctly
    For each instance I in the training data
        If I is classified incorrectly by the perceptron
            If I belongs to the first class add it to the weight vector
            else subtract it from the weight vector

Perceptron as a neural network

(Figure: the perceptron drawn as a neural network, with an input layer of one node per attribute plus the bias, connected directly to a single output node.)
Why does this work?
Consider the situation where an instance a pertaining to the first class has been added to the weight vector. This means the output for a has increased by:

    a0·a0 + a1·a1 + … + ak·ak

This number is always positive, thus the hyperplane has moved in the correct direction (and we can show the output decreases for instances of the other class)
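A direct transcription of the perceptron learning rule into Python (illustrative; it assumes numeric attribute vectors and linearly separable data, otherwise the loop would only stop at the epoch limit):

    import numpy as np

    def train_perceptron(instances, classes, max_epochs=100):
        # classes are +1 (first class) or -1 (second class); a bias attribute of 1 is added.
        X = np.column_stack([np.ones(len(instances)), np.asarray(instances, dtype=float)])
        w = np.zeros(X.shape[1])
        for _ in range(max_epochs):
            mistakes = 0
            for x, y in zip(X, classes):
                predicted = 1 if np.dot(w, x) > 0 else -1
                if predicted != y:
                    w += x if y == 1 else -x      # add or subtract the misclassified instance
                    mistakes += 1
            if mistakes == 0:
                break
        return w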

Linear models: Winnow

Another mistake-driven algorithm for finding a separating hyperplane
♦ Assumes binary data (i.e. attribute values are either zero or one)
Difference: multiplicative updates instead of additive updates
♦ Weights are multiplied by a user-specified parameter α > 1 (or its inverse)
Another difference: a user-specified threshold parameter θ
♦ Predict the first class if

    w0·a0 + w1·a1 + w2·a2 + … + wk·ak > θ

The algorithm

while some instances are misclassified
    for each instance a in the training data
        classify a using the current weights
        if the predicted class is incorrect
            if a belongs to the first class
                for each ai that is 1, multiply wi by alpha
                (if ai is 0, leave wi unchanged)
            otherwise
                for each ai that is 1, divide wi by alpha
                (if ai is 0, leave wi unchanged)

Winnow is very effective in homing in on relevant features (it is attribute efficient)
Can also be used in an on-line setting in which new instances arrive continuously (like the perceptron algorithm)

Balanced Winnow

Winnow doesn't allow negative weights and this can be a drawback in some applications
Balanced Winnow maintains two weight vectors, one for each class:

while some instances are misclassified
    for each instance a in the training data
        classify a using the current weights
        if the predicted class is incorrect
            if a belongs to the first class
                for each ai that is 1, multiply wi+ by alpha and divide wi− by alpha
                (if ai is 0, leave wi+ and wi− unchanged)
            otherwise
                for each ai that is 1, multiply wi− by alpha and divide wi+ by alpha
                (if ai is 0, leave wi+ and wi− unchanged)

An instance is classified as belonging to the first class (of two classes) if:

    (w0+ − w0−)·a0 + (w1+ − w1−)·a1 + … + (wk+ − wk−)·ak > θ

Instance-based learning

Distance function defines what's learned
Most instance-based schemes use Euclidean distance:

    sqrt( (a1(1) − a1(2))² + (a2(1) − a2(2))² + … + (ak(1) − ak(2))² )

a(1) and a(2): two instances with k attributes
Taking the square root is not required when comparing distances
Other popular metric: the city-block metric
♦ Adds differences without squaring them

Finding nearest neighbors efficiently

Simplest way of finding the nearest neighbour: a linear scan of the data
♦ Classification takes time proportional to the product of the number of instances in the training and test sets
Nearest-neighbor search can be done more efficiently using appropriate data structures
We will discuss two methods that represent the training data in a tree structure: kD-trees and ball trees

Normalization and other issues

Different attributes are measured on different scales ⇒ need to be normalized:

    ai = (vi − min vi) / (max vi − min vi)

vi: the actual value of attribute i
Nominal attributes: distance either 0 or 1
Common policy for missing values: assumed to be maximally distant (given normalized attributes)
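Putting the distance function and the normalization formula together, a linear-scan nearest-neighbour classifier looks roughly like this (illustrative sketch; the tree-based speed-ups discussed next replace the linear scan):

    import numpy as np

    def normalize(train, test):
        lo, hi = train.min(axis=0), train.max(axis=0)
        span = np.where(hi > lo, hi - lo, 1.0)        # avoid division by zero
        return (train - lo) / span, (test - lo) / span

    def nearest_neighbour_predict(train, labels, query):
        # Squared Euclidean distance is enough for comparison (no square root needed).
        distances = ((train - query) ** 2).sum(axis=1)
        return labels[int(np.argmin(distances))]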

kD-tree example / Using kD-trees: example

(Figures: a small kD-tree built from a handful of two-dimensional points, and the regions examined when searching it for the nearest neighbour of a query point.)

More on kD-trees

Complexity depends on the depth of the tree, given by the logarithm of the number of nodes
Amount of backtracking required depends on the quality of the tree ("square" vs. "skinny" nodes)
How to build a good tree? Need to find a good split point and split direction
♦ Split direction: direction with the greatest variance
♦ Split point: median value along that direction
Using the value closest to the mean (rather than the median) can be better if the data is skewed
Can apply this recursively

Building trees incrementally

Big advantage of instance-based learning: the classifier can be updated incrementally
♦ Just add the new training instance!
Can we do the same with kD-trees?
Heuristic strategy:
♦ Find the leaf node containing the new instance
♦ Place the instance into the leaf if the leaf is empty
♦ Otherwise, split the leaf according to the longest dimension (to preserve squareness)
Tree should be re-built occasionally (i.e. if depth grows to twice the optimum depth)

Ball trees

Problem in kD-trees: corners
Observation: no need to make sure that regions don't overlap
Can use balls (hyperspheres) instead of hyperrectangles
♦ A ball tree organizes the data into a tree of k-dimensional hyperspheres
♦ Normally allows for a better fit to the data and thus more efficient search

Ball tree example

(Figure: a set of points covered by nested balls and the corresponding ball tree.)

Using ball trees

Nearest-neighbor search is done using the same backtracking strategy as in kD-trees
A ball can be ruled out from consideration if the distance from the target to the ball's center exceeds the ball's radius plus the current upper bound

Building ball trees

Ball trees are built top down (like kD-trees)
Don't have to continue until the leaf balls contain just two points: can enforce a minimum occupancy (same in kD-trees)
Basic problem: splitting a ball into two
Simple (linear-time) split selection strategy:
♦ Choose the point farthest from the ball's center
♦ Choose a second point farthest from the first one
♦ Assign each point to these two points
♦ Compute cluster centers and radii based on the two subsets to get two balls

Discussion of nearest-neighbor learning

Often very accurate
Assumes all attributes are equally important
♦ Remedy: attribute selection or weights
Possible remedies against noisy instances:
♦ Take a majority vote over the k nearest neighbors
♦ Remove noisy instances from the dataset (difficult!)
Statisticians have used k-NN since the early 1950s
♦ If n → ∞ and k/n → 0, the error approaches the minimum
kD-trees become inefficient when the number of attributes is too large (approximately > 10)
Ball trees (which are instances of metric trees) work well in higher-dimensional spaces

More discussion

Instead of storing all training instances, compress them into regions
Example: hyperpipes (from the discussion of 1R)
Another simple technique (Voting Feature Intervals):
♦ Construct intervals for each attribute
  ♦ Discretize numeric attributes
  ♦ Treat each value of a nominal attribute as an "interval"
♦ Count the number of times each class occurs in each interval
♦ Prediction is generated by letting the intervals that contain the test instance vote

Clustering

Clustering techniques apply when there is no class to be predicted
Aim: divide instances into "natural" groups
As we've seen, clusters can be:
♦ disjoint vs. overlapping
♦ deterministic vs. probabilistic
♦ flat vs. hierarchical
We'll look at a classic clustering algorithm called k-means
♦ k-means clusters are disjoint, deterministic, and flat

The k-means algorithm

To cluster data into k groups (k is predefined):
1. Choose k cluster centers
   ♦ e.g. at random
2. Assign instances to clusters
   ♦ based on distance to the cluster centers
3. Compute centroids of the clusters
4. Go to step 2
   ♦ until convergence
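The four steps of k-means translate into a short NumPy routine (a sketch under the assumption of numeric data; restarting with different seeds, as discussed next, is left to the caller):

    import numpy as np

    def k_means(data, k, iterations=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = data[rng.choice(len(data), size=k, replace=False)]   # step 1: random centers
        for _ in range(iterations):
            distances = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            assignment = distances.argmin(axis=1)                      # step 2: assign instances
            new_centers = np.array([data[assignment == j].mean(axis=0)
                                    if np.any(assignment == j) else centers[j]
                                    for j in range(k)])                # step 3: centroids
            if np.allclose(new_centers, centers):                      # step 4: until convergence
                break
            centers = new_centers
        return assignment, centers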

Discussion

The algorithm minimizes the squared distance to the cluster centers
Result can vary significantly
♦ based on the initial choice of seeds
Can get trapped in a local minimum
♦ Example: an unlucky initial choice of cluster centres (figure)
To increase the chance of finding the global optimum: restart with different random seeds
Can be applied recursively with k = 2

Faster distance calculations

Can we use kD-trees or ball trees to speed up the process? Yes:
♦ First, build the tree, which remains static, for all the data points
♦ At each node, store the number of instances and the sum of all instances
♦ In each iteration, descend the tree and find out which cluster each node belongs to
  ♦ Can stop descending as soon as we find out that a node belongs entirely to a particular cluster
  ♦ Use the statistics stored at the nodes to compute the new cluster centers

Comments on basic methods

Bayes' rule stems from his "Essay towards solving a problem in the doctrine of chances" (1763)
♦ Difficult bit in general: estimating prior probabilities (easy in the case of naïve Bayes)
Extension of naïve Bayes: Bayesian networks (which we'll discuss later)
The algorithm for association rules is called APRIORI
Minsky and Papert (1969) showed that linear classifiers have limitations, e.g. can't learn XOR
♦ But: combinations of them can (→ multi-layer neural nets, which we'll discuss later)
