What is the difference between libFM and libFFM?
Views: 6715
Published: 2019-06-25

This article is about 4,820 characters long; the estimated reading time is 16 minutes.

https://www.kaggle.com/users/25112/steffen-rendle/forum

 

Congratulations to Yu-Chin, Wei-Sheng, Yong and Michael!

There have been several questions about the relationship between FM and FFM. Here are my thoughts about the differences and similarities.

Notation

  • m categorical variables (="fields")
  • k is the factorization dimension of FM
  • k' is the factorization dimension of FFM

Models (slightly simplified)

  • FM is defined as

      y(x) = sum_i sum_j>i 〈v_i,v_j〉 x_i x_j

  • FFM is defined as

      y(x) = sum_i sum_j>i 〈v^J_i,v^I_j〉 x_i x_j

The difference between the two models is that FFM assumes that the factors of different interactions (e.g. the v_i used for (I,J) and the v_i used for (I,L)) are independent, whereas FM uses a shared parameter space.
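
To make this concrete, here is a minimal Python sketch of the naive interaction terms of both models (not code from libFM or libFFM; the names fm_pairwise, ffm_pairwise, V_fm, V_ffm and field are illustrative):

    # A minimal sketch, not libFM/libFFM itself. Assumed shapes:
    # x[i] is the value of feature i, field[i] its field index,
    # V_fm[i] a length-k vector, V_ffm[i][f] a length-k' vector.
    import numpy as np

    def fm_pairwise(x, V_fm):
        # FM: one shared latent vector v_i per feature, O(m^2 * k) naively
        return sum(V_fm[i] @ V_fm[j] * x[i] * x[j]
                   for i in range(len(x)) for j in range(i + 1, len(x)))

    def ffm_pairwise(x, V_ffm, field):
        # FFM: v^J_i depends on the field J of the *other* feature j
        return sum(V_ffm[i][field[j]] @ V_ffm[j][field[i]] * x[i] * x[j]
                   for i in range(len(x)) for j in range(i + 1, len(x)))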

Number of parameters and costs

  • FFM has k' * (m-1) parameters per predictor variable.
  • FM has k parameters per predictor variable.
  • FFM has a runtime complexity of k' * m * (m-1) / 2 = O(k' * m^2) per training example
  • FM has a runtime complexity of k * m = O(k * m) per training example (because the nested sums can be decomposed due to parameter sharing; see the sketch below).

That means from a cost point of view, an FFM with dimensionality k' should be compared to an FM with an m times larger dimension, i.e. k=k'*m. With this choice both FFM and FM have the same number of parameters (memory costs) and the same runtime complexity.
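
The O(k * m) runtime of FM comes from the decomposition mentioned above: sum_i sum_j>i 〈v_i,v_j〉 x_i x_j = 0.5 * (‖sum_i x_i v_i‖^2 - sum_i ‖x_i v_i‖^2). A minimal sketch of this trick, assuming the same shapes as in the sketch above (the function name is mine, not libFM's):

    import numpy as np

    def fm_pairwise_linear(x, V_fm):
        # Same value as the naive double sum, but in O(m * k):
        # 0.5 * (||sum_i x_i v_i||^2 - sum_i ||x_i v_i||^2)
        xv = np.asarray(x)[:, None] * np.asarray(V_fm)   # row i holds x_i * v_i
        return 0.5 * float(np.sum(xv.sum(axis=0) ** 2) - np.sum(xv ** 2))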

Expressiveness

FFM and FM have different assumptions on the interaction matrix V. But given a large enough k and k', both can represent any possible second order polynomial model.

The motivation of FM and FFM is to approximate the (unobserved) pairwise interaction matrix W of polynomial regression by a low-rank solution V V^T ≈ W. FM and FFM make different assumptions about what V looks like. FFM assumes that V has a block structure:

         | v^2_1  v^1_2  0      0      0     |
         | v^3_1  0      v^1_3  0      0     |
         | v^4_1  0      0      v^1_4  0     |
V(FFM) = | v^5_1  0      0      0      v^1_5 |
         | 0      v^3_2  v^2_3  0      0     |
         | 0      v^4_2  0      v^2_4  0     |
         | ...                               |

FM does not assume such a structure:

V(FM) = | v_1  v_2  v_3 v_4 v_5 |

(Note that the v entries are not scalars but vectors of length k' (for FFM) or of length k (for FM). Also, to shorten notation, one entry v in the matrices above represents all the v vectors of a "field"/categorical variable.)

If the assumption of FFM holds, then FFM needs fewer parameters than FM to describe V, because FM would need parameters to capture the 0s.

If the assumption of FM holds, then FM needs fewer parameters than FFM to describe V, because FFM would need to repeat the values of the vectors, as it keeps separate parameters for each field.

 

==============================

You are very welcome!

(2) They are similar but not the same. The FM model in the paper you provided is field-unaware. The difference between equation 1 in the paper and the formula on page 14 of our slides is that our w is not only indexed by j1 and j2, but also by f1 and f2. Consider the example on page 15: if Rendle's FM is applied, it becomes:

w_376^T w_248 x_376 x_248 + w_376^T w_571 x_376 x_571 + w_376^T w_942 x_376 x_942
  + w_248^T w_571 x_248 x_571 + w_248^T w_942 x_248 x_942
  + w_571^T w_942 x_571 x_942

BTW, we use k = 4, not k = 2.
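
As an illustration of this expansion, the six pairwise terms could be computed as in the sketch below (the latent vectors are random placeholders, not values from the slides):

    import numpy as np

    rng = np.random.default_rng(0)
    active = [376, 248, 571, 942]                 # the four active one-hot features
    w = {j: rng.normal(size=4) for j in active}   # k = 4, as stated above
    y = sum(w[a] @ w[b]                           # the six pairwise terms
            for i, a in enumerate(active) for b in active[i + 1:])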

Please let me know if you have more questions. :)

 

Inspector wrote:

1) I think this helped a lot. I was confused about what the hashing trick was doing. I was thinking that perhaps the value of a feature, say 5a9ed9b0, was REPLACED by an integer. So I understand now that one-hot encoding is still being used; it is just the indexing of the data that is improved (memory-wise) when hashed.

2) So this is essentially the same model as shown in the Rendle paper (equation 1), where you used k=2 (the number of factors)? Is this correct?

Thanks very much!!
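
The hashing trick described in (1) can be sketched as follows (illustrative only: the field name "device_id" and the bin count are assumptions, and real implementations typically use a fast non-cryptographic hash rather than md5):

    import hashlib

    def hashed_index(field, value, num_bins=2 ** 20):
        # Hash the (field, value) string into a fixed-size index space;
        # the feature stays one-hot, only its column index comes from the hash.
        key = f"{field}:{value}".encode()
        return int(hashlib.md5(key).hexdigest(), 16) % num_bins

    col = hashed_index("device_id", "5a9ed9b0")   # one-hot column index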

 

Some hints about the usage of libFM:

@Kapil: The order of features in the design matrix has no effect on the model -- but you should of course use the same ordering in the training and test sets and in each line of each file. Theoretically there might be a difference because the learning algorithm iterates from the first to the last feature, so changing the order might change the convergence slightly.

about K2: The larger K2, the more complex the model gets. Usually, the larger K2, the better, but values that are too large can also overfit. So start with a small value of K2 and increase it (e.g. double it) until you get the best quality (on your holdout set). Runtime depends linearly on K2.

about generating libFM files: If your data is purely categorical and in some kind of CSV or TSV format, you can also use the Perl script in the "script/" folder of libFM to generate libFM-compatible files.
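
If the Perl script does not fit your pipeline, a rough Python stand-in could look like the sketch below (my assumption: a purely categorical TSV file with the target in the first column; feature ids are assigned in order of first appearance):

    import csv, sys

    index = {}                                    # (column, value) -> feature id
    def feat(col, val):
        return index.setdefault((col, val), len(index))

    with open(sys.argv[1]) as f:
        for row in csv.reader(f, delimiter="\t"):
            target, cats = row[0], row[1:]
            feats = " ".join(f"{feat(c, v)}:1" for c, v in enumerate(cats))
            print(f"{target} {feats}")            # one libSVM-style line per row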

about "linear regression" and libFM: A factorization machine (=FM) includes linear regression. E.g. if you choose K2=0, then libFM does exactly the same as linear regression. If you choose K2>0, then an FM is "linear regression + second order polynomial regression with factorized pairwise interactions".

Reposted from: https://www.cnblogs.com/zhizhan/p/5121525.html
