CS5243 - Data Science

Across
  1. A feature vectorization process in NLP
  4. Ratio of correctly predicted +ves to the total number of +ves
  6. Lemmatization uses __________ to modify the words
  9. A collection of all documents is called
  11. Distance measure used by k-NN to train and test categorical features
  13. A classifier model that performs well on training data but poorly on testing data is referred to as
  14. k-NN is an example of ____________ learning
  16. A public corpus
  19. P(X|C) in Naive Bayes is termed
  23. The sigmoid activation function transforms linearly combined data into ___________ form
  25. Method used to train the model with N-1 samples and test with 1 sample
  26. Type-I error in the confusion matrix refers to the _______ value
  27. Soft-margin SVM additionally has a ________ variable compared with hard-margin SVM
  29. ID3 is sensitive to the number of ____________ attribute values
  30. Eye color is an example of the ____________ datatype
Down
  2. The Likert scale is an example of __________ data
  3. Ratio of predicted +ves to the total number of +ves
  5. The value of k in k-NN is determined using
  7. Cancer vs. non-cancer is an example of the _________ binary datatype
  8. A cosine score equal to zero means the two vectors are
  10. P(X) in Naive Bayes is termed
  12. Binary classifier sensitive to noise
  15. Method used to handle continuous data in Naive Bayes
  17. A constant used to increase/decrease the net input value in logistic regression
  18. The entropy of a dataset with binary class labels can have a value > 1. State True or False
  20. Generalized distance measure of the L1 and L2 norms
  21. Size of the confusion matrix if a dataset with 600 features and N class labels is used for training and testing
  22. TF-IDF uses a ________ matrix for a large vocabulary
  24. Identifies the unique or rare occurrences of words in the documents
  28. The L1 norm is also called