
Intel® DAAL 2017 Developer Guide

Bibliography (English)

For details on the algorithms implemented in Intel® DAAL, refer to the following publications.

[Agrawal94]

Rakesh Agrawal, Ramakrishnan Srikant. Fast Algorithms for Mining Association Rules. Proceedings of the 20th VLDB Conference Santiago, Chile, 1994.

[Ben05]

Ben-Gal I. Outlier Detection. In: Maimon O. and Rokach L. (Eds.) Data Mining and Knowledge Discovery Handbook: A Complete Guide for Practitioners and Researchers, Kluwer Academic Publishers, 2005, ISBN 0-387-24435-2.

[Biernacki2003]

C. Biernacki, G. Celeux, and G. Govaert. Choosing Starting Values for the EM Algorithm for Getting the Highest Likelihood in Multivariate Gaussian Mixture Models. Computational Statistics & Data Analysis, 41, 561-575, 2003.

[Billor2000]

Nedret Billor, Ali S. Hadi, and Paul F. Velleman. BACON: blocked adaptive computationally efficient outlier nominators. Computational Statistics & Data Analysis, 34, 279-298, 2000.

[Bishop2006]

Christopher M. Bishop. Pattern Recognition and Machine Learning, p. 198. Springer Science+Business Media, LLC, ISBN-10: 0-387-31073-8, 2006.

[Boser92]

B. E. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp: 144–152, ACM Press, 1992.

[Byrd2015]

R. H. Byrd, S. L. Hansen, Jorge Nocedal, Y. Singer. A Stochastic Quasi-Newton Method for Large-Scale Optimization, 2015. arXiv:1401.7020v2 [math.OC]. Available from http://arxiv.org/abs/1401.7020v2.

[bzip2]

http://www.bzip.org/

[Dempster77]

A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum Likelihood from Incomplete Data via the EM Algorithm. J. Royal Statist. Soc. Ser. B., 39, 1977.

[Fan05]

Rong-En Fan, Pai-Hsuen Chen, Chih-Jen Lin. Working Set Selection Using Second Order Information for Training Support Vector Machines. Journal of Machine Learning Research 6 (2005), pp: 1889–1918.

[Fleischer2008]

Rudolf Fleischer, Jinhui Xu. Algorithmic Aspects in Information and Management. 4th International conference, AAIM 2008, Shanghai, China, June 23-25, 2008. Proceedings, Springer.

[Freund99]

Yoav Freund, Robert E. Schapire. A Short Introduction to Boosting. Journal of Japanese Society for Artificial Intelligence, 14(5), pp: 771-780, 1999.

[Freund01]

Yoav Freund. An adaptive version of the boost by majority algorithm. Machine Learning (43), pp. 293-318, 2001.

[Friedman00]

Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Additive Logistic regression: a statistical view of boosting. The Annals of Statistics, 28(2), pp: 337-407, 2000.

[Hastie2009]

Trevor Hastie, Robert Tibshirani, Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Second Edition (Springer Series in Statistics), Springer, 2009. Corrected 7th printing, 2013 edition (2011/12/23).

[Hsu02]

Chih-Wei Hsu and Chih-Jen Lin. A Comparison of Methods for Multiclass Support Vector Machines. IEEE Transactions on Neural Networks, Vol. 13, No. 2, pp: 415-425, 2002.

[Iba92]

Wayne Iba, Pat Langley. Induction of One-Level Decision Trees. Proceedings of Ninth International Conference on Machine Learning, pp: 233-240, 1992.

[Ioffe2015]

Sergey Ioffe, Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, arXiv:1502.03167v3 [cs.LG] 2 Mar 2015, available from http://arxiv.org/pdf/1502.03167.pdf.

[Joachims99]

Thorsten Joachims. Making Large-Scale SVM Learning Practical. Advances in Kernel Methods - Support Vector Learning, B. Schölkopf, C. Burges, and A. Smola (ed.), pp: 169 – 184, MIT Press Cambridge, MA, USA 1999.

[Krizh2012]

Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. Available from http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf.

[LeCun15]

Yann LeCun, Yoshua Bengio, Geoffrey E. Hinton. Deep Learning. Nature (521), pp. 436-444, 2015.

[Lloyd82]

Stuart P. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2), pp: 129–137, 1982.

[lzo]

http://www.oberhumer.com/opensource/lzo/

[Maitra2009]

Maitra, R. Initializing Partition-Optimization Algorithms. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 6(1), pp: 144-157, 2009.

[Mu2014]

Mu Li, Tong Zhang, Yuqiang Chen, Alexander J. Smola. Efficient Mini-batch Training for Stochastic Optimization, 2014. Available from https://www.cs.cmu.edu/~muli/file/minibatch_sgd.pdf.

[Renie03]

Jason D. M. Rennie, Lawrence Shih, Jaime Teevan, David R. Karger. Tackling the Poor Assumptions of Naïve Bayes Text Classifiers. Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003), Washington DC, 2003.

[rle]

http://data-compression.info/Algorithms/RLE/index.htm

[Rumelhart86]

David E. Rumelhart, Geoffrey E. Hinton, Ronald J. Williams. Learning representations by back-propagating errors. Nature (323), pp. 533-536, 1986.

[Sokolova09]

Marina Sokolova, Guy Lapalme. A systematic analysis of performance measures for classification tasks. Information Processing and Management 45 (2009), pp. 427–437. Available from http://atour.iro.umontreal.ca/rali/sites/default/files/publis/SokolovaLapalme-JIPM09.pdf.

[Szegedy13]

Christian Szegedy, Alexander Toshev, Dumitru Erhan. Scalable Object Detection Using Deep Neural Networks. Advances in Neural Information Processing Systems, 2013.

[West79]

D. H. D. West. Updating Mean and Variance Estimates: An Improved Method. Communications of the ACM, 22(9), pp: 532-535, 1979.

[Wu04]

Ting-Fan Wu, Chih-Jen Lin, Ruby C. Weng. Probability Estimates for Multi-class Classification by Pairwise Coupling. Journal of Machine Learning Research 5, pp: 975-1005, 2004.

[zLib]

http://www.zlib.net/