Question

Given the following six items in 1D: x1 = -10, x2 = 8, x3 = -6, x4 = 4, x5 = -3, x6 = 2, perform k-means clustering by hand to obtain k = 2 clusters. Specifically:
1. Start from initial cluster centers c1 = 0, c2 = 9. Show your steps for all iterations: (1) the cluster assignments at the start of each iteration; (2) the updated cluster centers at the end of that iteration; (3) the energy at the end of that iteration.
2. Repeat the above, but start from initial cluster centers c1 = -8, c2 = 9.
3. Which k-means solution is better? Why?

Answer #1

[Handwritten worked solution transcribed from an image; mostly illegible. The legible fragments show the per-iteration cluster-center updates and note that the run stops once an iteration produces the same means as the previous one.]

The second k-means run is better: its initial centers already lie close to the final cluster means, so the assignments stabilize and the algorithm converges to its final clustering and energy in fewer iterations.
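A minimal sketch of the computation (Python), assuming the data points and initial centers as reconstructed above (x = -10, 8, -6, 4, -3, 2; starts (0, 9) and (-8, 9)); it prints, for every iteration, the cluster assignments, the updated centers, and the energy (sum of squared distances to the assigned center):

```python
# 1-D k-means (Lloyd's algorithm) hand-check script for k = 2.
# The data and initial centers below are my reading of the garbled
# question text and may differ from the original assignment.

def kmeans_1d(points, centers, max_iters=20):
    centers = list(centers)
    prev_assign = None
    for it in range(1, max_iters + 1):
        # Step 1: assign each point to its nearest current center.
        assign = [min(range(len(centers)), key=lambda j: (x - centers[j]) ** 2)
                  for x in points]
        # Step 2: move each center to the mean of its assigned points
        # (a center with no assigned points keeps its old value).
        for j in range(len(centers)):
            members = [x for x, a in zip(points, assign) if a == j]
            if members:
                centers[j] = sum(members) / len(members)
        # Step 3: energy = sum of squared distances to the assigned (updated) center.
        energy = sum((x - centers[a]) ** 2 for x, a in zip(points, assign))
        print(f"iter {it}: assignments={assign}, "
              f"centers={[round(c, 3) for c in centers]}, energy={energy:.2f}")
        if assign == prev_assign:   # assignments unchanged -> converged
            break
        prev_assign = assign
    return centers, energy

data = [-10, 8, -6, 4, -3, 2]
print("Run 1, initial centers (0, 9):")
kmeans_1d(data, [0, 9])
print("Run 2, initial centers (-8, 9):")
kmeans_1d(data, [-8, 9])
```

Under this reading of the data, both runs end at the same partition ({-10, -6, -3} and {2, 4, 8}) with the same final energy of about 43.3; the second run simply reaches it in fewer iterations.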

Similar Homework Help Questions
  • Question 4 (1 pt): Which of the following is not a reason why the K-means algorithm will likely end up with a sub-optimal clustering? (Select all that apply.) Bad choices for the initial cluster centers. Choosing a k that corresponds to the number of natural clusters in the dataset. Fast convergence of the K-means algorithm. Existence of closely located data samples in the dataset. Question 5 (1 pt): Which of the following is a step in a K-means algorithm implementation? (Select... — a short sketch after this list illustrates how bad initial centers can trap k-means in a sub-optimal clustering.

  • K-means clustering: K-means clustering is a very well-known method of clustering unlabeled data. The simplicity of the process has made it popular with data analysts. The task is to form clusters of similar data objects (points, properties, etc.). When the given dataset is unlabeled, we try to draw some conclusions about the data by forming clusters. The number of clusters can be pre-determined, and the number of points can be in any range. The main idea behind the process is finding the nearest...

  • 1. Implement the K-means algorithm using these two as a reference. 2. Use Matlab's implementation of kmeans to check your results on the fisheriris dataset (https://www.mathworks.com/help/stats/kmeans.html). a. The fisheriris dataset is built into Matlab, and you can load it using 'load fisheriris'. b. Please note that labels are available for the dataset, so you can check the performance of the kmeans algorithm on it. Fig. 14.1: A two-dimensional domain with clusters of examples...

  • Data clustering and the k-means algorithm. However, I'm not able to list all of the data sets, but they include: ecoli.txt, glass.txt, ionoshpere.txt, iris_bezdek.txt, landsat.txt, letter_recognition.txt, segmentation.txt, vehicle.txt, wine.txt and yeast.txt. Input: Your program should be non-interactive (that is, it should not ask the user explicit questions) and take the following command-line arguments: <F> <K> <I> <T> <R>, where F: name of the data file; K: number of clusters (a positive integer greater than one); I: maximum number...
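The first of the similar questions above asks why k-means often ends at a sub-optimal clustering. Below is a toy sketch of that failure mode (not taken from any of the assignments quoted above; the four points and the deliberately bad starting centers are made up for illustration), using scikit-learn's KMeans with fixed initializations:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two obvious clusters: a pair of points near x=0 and a pair near x=4.
X = np.array([[0.0, 0.0], [0.0, 1.0], [4.0, 0.0], [4.0, 1.0]])

good_init = np.array([[0.0, 0.5], [4.0, 0.5]])  # one center per true cluster
bad_init = np.array([[2.0, 0.0], [2.0, 1.0]])   # both centers between the clusters

for name, init in [("good", good_init), ("bad", bad_init)]:
    km = KMeans(n_clusters=2, init=init, n_init=1, max_iter=100).fit(X)
    print(f"{name} init: labels={km.labels_.tolist()}, "
          f"inertia (energy)={km.inertia_:.1f}")
# expected: the good start reaches energy (inertia) 1.0, while the bad start
# converges immediately to a local minimum with energy 16.0
```

With the bad start, both centers sit between the two natural clusters, the first assignment splits the data top/bottom instead of left/right, and the very next center update reproduces the same centers, so the algorithm stops at an energy of 16 instead of 1.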
