Long short-term memory based movie recommendation

ABSTRACT

Recommender systems (RS) have become a fundamental tool for helping users make decisions among millions of different choices in today's era of Big Data. They bring huge benefits to many business models around the world because of their effectiveness in reaching target customers. Many recommendation models and techniques have been proposed, and many have achieved remarkable results. Collaborative filtering and content-based filtering methods are common, but both have some disadvantages. A critical one is that they focus only on a user's long-term static preferences while ignoring his or her short-term transactional patterns, so they miss the user's preference shifts over time. In this case, the user's intent at a certain point in time can easily be submerged by his or her historical decision behaviors, which leads to unreliable recommendations. To deal with this issue, a session of user interactions with the items can be considered as a solution. In this study, Long Short-Term Memory (LSTM) networks are analyzed and applied to user sessions in a recommender system. The MovieLens dataset is considered as a case study of movie recommender systems. This dataset is preprocessed to extract user-movie sessions for discovering user behavior and making movie recommendations to users. Several experiments have been carried out to evaluate the LSTM-based movie recommender system. In these experiments, the LSTM networks are compared with a similar deep learning method, Recurrent Neural Networks (RNN), and a baseline machine learning method, collaborative filtering using item-based nearest neighbors (item-KNN). It has been found that the LSTM networks can be improved by optimizing their hyperparameters and that they outperform the other methods when predicting the next movies that users are interested in.

… the training set.
+ The testing set is also used in evaluation. Specifically, unlike in a classification problem, evaluation of an RS model plays a similar role to that in unsupervised learning. In fact, there is no guarantee that the list of movies recommended to a user is correct or not. Thus, the evaluation method used is a statistical measure that describes the distribution of the testing set and the validation set.
Choosing a good splitting ratio is an indispensable part of data splitting, and it is usually part of a cross-validation process. The 80-20 or 90-10 (percentage) splits have become golden ratios in both theoretical and practical applications. In this study, 80% of the data are used for training, 10% for testing and 10% for validation.
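As a minimal sketch (not the authors' code), assuming the interactions are loaded into a pandas DataFrame with a timestamp column, the 80/10/10 split could look like this; the column names and file name are assumptions for illustration:

import pandas as pd

def split_interactions(df: pd.DataFrame):
    """Split an interaction log chronologically into 80% train, 10% validation, 10% test."""
    df = df.sort_values("timestamp")        # keep the temporal order of interactions
    n = len(df)
    train = df.iloc[: int(0.8 * n)]
    valid = df.iloc[int(0.8 * n): int(0.9 * n)]
    test = df.iloc[int(0.9 * n):]
    return train, valid, test

# Hypothetical usage:
# ratings = pd.read_csv("ratings.csv")      # columns: userId, movieId, rating, timestamp
# train, valid, test = split_interactions(ratings)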
Framework for detecting the appropriate data and evaluating the learning model in the movie recommender system
In this study, the MovieLens dataset [13] with approximately 10 million interactions is used. At the beginning, some features are chosen from the movie dataset to build the proposed RS. Then, data preprocessing is applied to clean the data and convert user sessions into sequences. After that, dataset splitting is applied to define the three sets. The training set is fed into the model, and several tests are run to choose the best hyperparameters, which should minimize the loss. Finally, the evaluation on the validation and testing sets, using Mean Reciprocal Rank (MRR) and other evaluation metrics such as Precision@k, Recall@k and F1-Score@k, confirms whether the used model is effective or not. The overall workflow of detecting the best model and the evaluation is shown in Figure 3.
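A rough sketch of the preprocessing step, under the assumption that the interactions sit in a pandas DataFrame with userId, movieId and timestamp columns (the column names and the fixed sequence length are illustrative, not taken from the paper), might be:

import numpy as np
import pandas as pd

def to_sequences(ratings: pd.DataFrame, max_len: int = 200):
    """Convert a user-movie interaction log into padded per-user item sequences."""
    # Map raw movie ids to contiguous indices starting at 1 (0 is reserved for padding).
    item_index = {m: i + 1 for i, m in enumerate(ratings["movieId"].unique())}
    ratings = ratings.assign(item=ratings["movieId"].map(item_index))

    sequences = []
    for _, group in ratings.sort_values("timestamp").groupby("userId"):
        items = group["item"].tolist()[-max_len:]               # keep the most recent interactions, in order
        sequences.append([0] * (max_len - len(items)) + items)  # left-pad to a fixed length
    return np.array(sequences, dtype=np.int64), item_index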
In this study, Mean Reciprocal Rank (MRR) is used as a statistical measure for evaluating the model. In general, the Reciprocal Rank information retrieval measure calculates the reciprocal of the rank at which the first relevant item was retrieved [14]. Averaging it across queries gives the MRR. The formula of MRR is described as follows:
MRR = (1 / |Q|) · Σ_{i=1}^{|Q|} 1 / rank_i

where rank_i refers to the rank position of the first relevant item in the i-th query, and |Q| is the number of queries.
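For illustration only, a tiny sketch of this computation (each query is represented by the rank of its first relevant item):

def mean_reciprocal_rank(first_relevant_ranks):
    """MRR = (1/|Q|) * sum of 1/rank_i over all queries."""
    return sum(1.0 / rank for rank in first_relevant_ranks) / len(first_relevant_ranks)

# Example: three queries whose first relevant items appear at ranks 1, 3 and 2:
# mean_reciprocal_rank([1, 3, 2]) == (1 + 1/3 + 1/2) / 3 ≈ 0.611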
The model computes the MRR score for both validation and testing. The model is then considered good when the MRR scores on the testing and validation sets are approximately the same. Other evaluation metrics used in this approach are Precision@k and Recall@k [14]. In comparison to MRR, these metrics focus on the k highest-ranking items, which makes them reasonable evaluation measures for emphasizing returning more relevant items earlier. The key point of this method is
to take all the precisions at k for each sequence in the testing set. In more detail, a sequence of length n is split into two parts: a sequence of length k kept for comparison, and the remaining sequence of length n - k fed into the prediction function, after which the sorted factorization matrix scores are retrieved. If any item among the top k highest scores matches an item in the held-out sequence of length k, the number of hits is increased by 1. Then, the precision at k for one sequence of length n is given by the number of hits divided by k, where k stands for the number of recommended items. For the recall, k stands for the number of relevant items. In fact, k in the recall is usually smaller than the one in the precision. Finally, the mean average precision and recall at k are calculated over all sequences in the testing set. In general, the formulas of the precision, recall and F1-score at k are described as follows.
Precision@k = (relevant items in top k) / k
Recall@k = (relevant items in top k) / (relevant items)
F1-Score@k = 2 · Precision@k · Recall@k / (Precision@k + Recall@k)
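The hit-counting procedure described above can be sketched as follows; the score vector is assumed to come from the trained model's prediction function (a sketch, not the authors' implementation):

import numpy as np

def precision_recall_f1_at_k(scores: np.ndarray, held_out_items, k: int = 20):
    """Compute Precision@k, Recall@k and F1-Score@k for one test sequence.

    scores: a model score for every item (higher means more relevant);
    held_out_items: the last part of the sequence, held out for comparison.
    """
    top_k = np.argsort(-scores)[:k]                        # the k highest-scoring items
    hits = len(set(top_k.tolist()) & set(held_out_items))  # matches between top-k and held-out items
    precision = hits / k
    recall = hits / len(held_out_items)
    f1 = 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f1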
Figure 3: Overall workflow of detecting the best
model for the movie recommender system
LSTM model optimization
Hyperparameter optimization for the LSTM model
LSTM hyperparameters
There is not yet a good methodology for choosing ideal hyperparameters for a neural network. Thus, the more trials are run, the better the results for the model. In this study, an automatic testing program with some random hyperparameters is built, and it takes three consecutive days to find the best hyperparameters. Some hyperparameters used in the configuration are the embedding dimension, number of epochs, random state (shuffling number of interactions), learning rate, batch size... The loss function is kept the same in these experiments.
Loss function
Several loss functions are applied to find the most appropriate model; their formulas are listed in Table 1.
Model efficiency evaluation
In this study, LSTM, RNN and another baseline method are chosen to compare the evaluation metrics. The common baseline is Item-KNN, which considers the similarity between the vectors of sessions. This baseline method is one of the most popular item-to-item solutions in practical systems. The MRR, average Precision and average Recall at 20 are measured to find out the efficiency of the LSTM versus the other methods.
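As a rough sketch of this kind of baseline (a simplified cosine-similarity variant, not necessarily the authors' exact implementation), item-to-item similarity can be computed from session co-occurrence:

import numpy as np

def item_knn_scores(session_item: np.ndarray, current_item: int) -> np.ndarray:
    """Score all items by cosine similarity to the current item.

    session_item: binary matrix of shape (num_sessions, num_items); entry (s, i)
    is 1 if item i occurred in session s.
    """
    co_occurrence = session_item.T @ session_item          # item-item co-occurrence counts
    norms = np.sqrt(np.diag(co_occurrence)) + 1e-8         # per-item vector norms
    similarity = co_occurrence[current_item] / (norms * norms[current_item])
    similarity[current_item] = 0.0                         # never recommend the current item itself
    return similarity   # recommend the items with the highest similarity scores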
EXPERIMENTAL RESULTS AND
EVALUATION
Hyperparameter optimization
The experiment is performed by running 10 trials with randomly selected hyperparameters, which are defined in the fixed lists as follows:
+ Learning rate: [0.001, 0.01, 0.1, 0.05]
+ l2: [1e-6, 1e-5, 0, 0.0001, 0.001]
+ Embedding dimension: [8, 32, 64, 128, 256]
+ Batch size: [8, 16, 32, 64, 128]
The loss function used in this experiment is BPR, and the number of epochs for each trial equals 10. The results sorted by loss are shown in Table 2.
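A minimal sketch of such a random search is shown below; train_and_evaluate is a placeholder for a function that trains the LSTM for 10 epochs with the BPR loss and returns the final loss (its name and signature are assumptions):

import random

SEARCH_SPACE = {
    "learning_rate": [0.001, 0.01, 0.1, 0.05],
    "l2": [1e-6, 1e-5, 0, 0.0001, 0.001],
    "embedding_dim": [8, 32, 64, 128, 256],
    "batch_size": [8, 16, 32, 64, 128],
}

def random_search(train_and_evaluate, num_trials: int = 10, seed: int = 0):
    """Run random trials over the search space and return them sorted by loss."""
    rng = random.Random(seed)
    results = []
    for _ in range(num_trials):
        config = {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}
        results.append((train_and_evaluate(**config), config))
    return sorted(results, key=lambda pair: pair[0])   # lowest loss first, as in Table 2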
The best result of the experiment is chosen for the model. In fact, the model would require a bigger system to train faster with more epochs, but the current one is not sufficient for longer training. Therefore, more time would be needed to train a complete model.
Loss function
This experiment compares the efficiency of the loss functions mentioned in the previous section. The hyperparameters are taken from the best configuration found in the previous experiment. The results of the experiment on the training set for the four types of loss after 10 epochs are illustrated in Figure 4.
Table 1: Loss function formulas for the model

Pointwise: L = positive_loss + negative_loss, where positive_loss = 1 - sigmoid(pos_pred) and negative_loss = sigmoid(neg_pred)
Hinge: L = max{0, (neg_pred - pos_pred) + 1}
Adaptive Hinge: L = max{0, (neg_pred - highest_pos_pred) + 1}
Bayesian Personalized Ranking (BPR): L = 1 - sigmoid(pos_pred - neg_pred)

(neg_pred: negative prediction; pos_pred: positive prediction)
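As an illustration of the formulas in Table 1, the BPR and Hinge losses could be written in PyTorch roughly as below (a sketch averaged over a mini-batch, assuming pos_pred and neg_pred are tensors of model scores; this is not the authors' exact code):

import torch

def bpr_loss(pos_pred: torch.Tensor, neg_pred: torch.Tensor) -> torch.Tensor:
    """Bayesian Personalized Ranking: L = 1 - sigmoid(pos_pred - neg_pred)."""
    return (1.0 - torch.sigmoid(pos_pred - neg_pred)).mean()

def hinge_loss(pos_pred: torch.Tensor, neg_pred: torch.Tensor) -> torch.Tensor:
    """Hinge: L = max{0, (neg_pred - pos_pred) + 1}."""
    return torch.clamp(neg_pred - pos_pred + 1.0, min=0.0).mean()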
Figure 4: Loss function performance on the model
Table 2: Sorted loss results for several hyperparameter tests

Batch size | Embedding dimension | Learning rate | L2     | Loss
128        | 64                  | 0.001         | 0.0001 | 0.1962
64         | 64                  | 0.01          | 0.0001 | 0.2491
8          | 8                   | 0.01          | 1e-6   | 0.2537
16         | 256                 | 0.1           | 1e-5   | 0.2701
16         | 32                  | 0.05          | 0.001  | 0.271
16         | 128                 | 0.01          | 0      | 0.276
16         | 64                  | 0.001         | 1e-6   | 0.281
32         | 32                  | 0.05          | 1e-5   | 0.2944
16         | 128                 | 0.1           | 0.001  | 0.3566
8          | 128                 | 0.05          | 0      | 0.4204
According to the graph in Figure 4, the BPR and Hinge losses can minimize the loss better than the others. BPR in particular reduces the loss well from the beginning and looks more stable than Hinge under regularization, as shown by the comparison of training and testing in Figure 5. Therefore, BPR is chosen for the model instead of Hinge, Pointwise and Adaptive Hinge.
Evaluation results
The evaluation results for LSTM, RNN and the baseline Item-KNN method are shown in Table 3.
According to the experimental results, both LSTM and RNN perform better than the baseline method (Item-KNN). Moreover, LSTM performs well in building an RS with specific relevant movies. LSTM can forget what it considers unnecessary for the long term. Therefore, the learning results of LSTM are more easily updated over time and with the frequency of interactions. Overall, LSTM is the better model for building the session-based RS.
Figure 5: Good fit on training and testing by BPR loss function
Table 3: Evaluation results of LSTM, RNN and Item-KNN on the testing set

DL Model          | Precision@20 | Recall@20 | F1-Score@20 | MRR (test/validation)
LSTM              | 0.705        | 0.707     | 0.706       | 0.239/0.235
RNN (dropout=0.2) | 0.598        | 0.614     | 0.606       | 0.224/0.204
Item-KNN          | 0.506        | 0.507     | 0.506       | 0.201/0.206
DISCUSSION
In our approach, a modern model of recurrent neural networks, i.e., LSTM, is applied to build the movie RS for the task of session-based recommendations. Besides, LSTM is modified to fit the task better by using session-parallel mini-batches and ranking losses. The evaluation results have shown an outstanding improvement in comparison with the popular baseline approach. The movie dataset provided by GroupLens is excellent for research on some principal features. Thus, this approach can be applied not only to movie data, but also to some other practical domains.
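To make the session-parallel mini-batch idea concrete, here is a simplified sketch (assuming sessions are plain lists of item ids; this illustrates the general technique, not the authors' code):

def session_parallel_batches(sessions, batch_size: int = 32):
    """Yield (inputs, targets) built from `batch_size` sessions advanced in parallel."""
    sessions = [s for s in sessions if len(s) >= 2]          # need at least one (input, target) pair
    active = list(range(min(batch_size, len(sessions))))     # session indices currently in the batch
    cursors = [0] * len(active)                               # current position inside each session
    next_session = len(active)

    while active:
        inputs = [sessions[s][c] for s, c in zip(active, cursors)]
        targets = [sessions[s][c + 1] for s, c in zip(active, cursors)]
        yield inputs, targets
        # Advance every session; replace finished sessions with fresh ones.
        for i in range(len(active) - 1, -1, -1):
            cursors[i] += 1
            if cursors[i] + 1 >= len(sessions[active[i]]):    # no next item left to predict
                if next_session < len(sessions):
                    active[i], cursors[i] = next_session, 0
                    next_session += 1
                else:
                    del active[i]
                    del cursors[i]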
CONCLUSIONS
In conclusion, the LSTM-based movie RS has been proposed, and it can achieve higher recommendation performance when the hyperparameters of LSTM, the loss function and the optimization function are optimized. The loss keeps decreasing after each epoch and is kept as small as possible over long-term computation. The Adam optimizer plays a great role in adapting the learning rate for each parameter. Moreover, LSTM has proved to be a better model than RNN during evaluation. Although the results of both are not yet good enough, this study has presented some solutions to improve the accuracy of the learning model.
ACKNOWLEDGMENTS
This research is funded by Vietnam National Foun-
dation for Science and Technology Development
(NAFOSTED) under grant number: 06/2018/TN.
ABBREVIATIONS
RS: Recommender Systems
LSTM: Long Short-Term Memory
RNN: Recurrent Neural Networks
DL: Deep Learning
KNN: K-Nearest Neighbors
BPR-MF: Bayesian Personalized Ranking – Matrix
Factorization
SGD: Stochastic Gradient Descent
MRR: Mean Reciprocal Rank
COMPETING INTERESTS
The authors hereby declare that there is no conflict of
interest in the publication of the article.
AUTHORS’ CONTRIBUTION
Duy Bao Tran was involved in proposing and implementing the solutions and writing the report.
Thi Thanh Sang Nguyen gave ideas and solutions, assessed the experimental results and wrote the manuscript.
REFERENCES
1. Aggarwal CC. Recommender Systems. Springer International Publishing. 2016; Available from: https://doi.org/10.1007/978-3-319-29659-3.
2. Agrawal R, Srikant R. Fast Algorithms for Mining Association Rules in Large Databases. Proceedings of the 20th International Conference on Very Large Data Bases. 1994; p. 487–499.
3. Nguyen TTS, Lu H, Tran TP, Lu J. Investigation of Sequential Pattern Mining Techniques for Web Recommendation. International Journal of Information and Decision Sciences (IJIDS). 2012; p. 293–312. Available from: https://doi.org/10.1504/IJIDS.2012.050378.
4. Vieira A. Predicting online user behaviour using deep learning algorithms. arXiv:1511.06247v3. 2016.
5. Devooght R, Bersini H. Collaborative Filtering with Recurrent Neural Networks. arXiv:1608.07400. 2016.
6. Hidasi B, Karatzoglou A, Baltrunas L, Tikk D. Session-based Recommendations with Recurrent Neural Networks. 2015.
7. Graves A. Supervised Sequence Labelling with Recurrent Neural Networks. Springer-Verlag Berlin Heidelberg. 2012; Available from: https://doi.org/10.1007/978-3-642-24797-2.
8. Patterson J, Gibson A. Deep Learning: A Practitioner's Approach. O'Reilly Media, Inc. 2017.
9. Rendle S, Freudenthaler C, Gantner Z, Schmidt-Thieme L. BPR: Bayesian personalized ranking from implicit feedback. Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, Montreal, Quebec, Canada. 2009; p. 452–461.
10. Kingma DP, Ba J. Adam: A Method for Stochastic Optimization. 2015.
11. Duchi J, Hazan E, Singer Y. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. J Mach Learn Res. 2011;12:2121–2159.
12. Hinton G, Srivastava N, Swersky K. Lecture 6d: a separate, adaptive learning rate for each connection. Slides of Lecture Neural Networks for Machine Learning. 2012.
13. GroupLens. 2019; Available from: https://grouplens.org/datasets/movielens/.
14. Liu L, Özsu MT. Encyclopedia of Database Systems. Springer. 2009; Available from: https://doi.org/10.1007/978-0-387-39940-9.
Faculty of Information Technology, International University, VNU-HCM, Vietnam

Correspondence
Nguyễn Thị Thanh Sang, Faculty of Information Technology, International University, VNU-HCM, Vietnam
Email: nttsang@hcmiu.edu.vn

History
Received: 10-8-2019
Accepted: 22-8-2019
Published: 19-9-2019

Copyright
© VNU-HCM. This is an open-access article published under the terms of the Creative Commons Attribution 4.0 International license.

Long short-term memory based movie recommendation
Trần Duy Bảo, Nguyễn Thị Thanh Sang*
Keywords: Deep learning, Long short-term memory, Recommender systems, Sequence mining
Cite this article: Bảo T D, Sang N T T. Long short-term memory based movie recommendation. Sci. Tech. Dev. J. – Eng. Tech.; 3(S1):SI1-SI9.

DOI: 10.32508/stdjet.v3iSI1.540
