I am working on a data-mining project. My company has given me dummy Facebook-style data for 6 million customers, and I was assigned to find the similarity between any two users. Can anyone give me some ideas on how to deal with such a large community dataset? Thanks in advance.
Problem: I use the status info and hashtag info (hashtags are the words highlighted by a user) as the two criteria to measure the similarity between two different users. Because of the large number of users, and because each user may have millions of hashtags and statuses, can anyone tell me a good way to calculate the similarity between two users quickly? I have tried using TF-IDF to calculate the similarity between two different users, but it seems infeasible at this scale. Does anyone have a fast algorithm or good ideas that would let me quickly find all the similarities between users?
For example:
user A's hashtags = {cat, bull, cow, chicken, duck}
user B's hashtags = {cat, chicken, cloth}
user C's hashtags = {lenovo, HP, Sony}
Clearly, C has nothing in common with A, so there is no need to waste time computing that similarity; we could filter out all the unrelated users before calculating anything. In fact, more than 90% of the total users are unrelated to any particular user. How can I use hashtags to quickly find the group of users potentially similar to A? Is this a good idea, or should we just directly calculate the similarity between A and all other users? What would be the fastest algorithm tailored to this problem?
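For a concrete sense of what "similarity between two hashtag sets" could mean, the Jaccard index (size of the intersection divided by size of the union) is one common choice; the question doesn't fix a measure, so this is just an illustrative Python sketch using the three example users above.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

tags_a = {"cat", "bull", "cow", "chicken", "duck"}
tags_b = {"cat", "chicken", "cloth"}
tags_c = {"lenovo", "hp", "sony"}

print(jaccard(tags_a, tags_b))  # 2 shared tags of 6 total -> 0.3333333333333333
print(jaccard(tags_a, tags_c))  # no overlap -> 0.0
```

Since A and C have Jaccard similarity exactly 0, any scheme that can cheaply detect "intersection is empty" also implements the filtering idea described above.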
2 Replies · 1204 Views · Last Post: 03 December 2012, 02:40 AM
#1
how to calculate the similarity of two users in facebook?
Posted 02 December 2012, 07:19 PM
Replies To: how to calculate the similarity of two users in facebook?
#2
Re: how to calculate the similarity of two users in facebook?
Posted 02 December 2012, 08:25 PM
Locality-sensitive hashing is often used for similarity-matching and nearest-neighbor problems, comparing N objects in roughly O(N) time rather than O(N²). The idea is that if two pieces of data are similar, their hashes will be similar. Perhaps you could use the 100 or 1000 most popular hashtags as your high-dimensional data, and then represent each person as a binary array, where a 1 at some index indicates they have listed the tag associated with that index, and 0 otherwise. Unfortunately, LSH isn't a widely studied topic, so there's not a ton of easy-to-digest information on it. Here are a few sources that may help to get you started.
Reading:
Web Intelligence Lecture on LSH
Locality Sensitive Hashing and Large Scale Image Search
A theoretical treatment of Bayesian LSH and similarity search (direct link to PDF)
Implementations:
C#/C++ Libraries
Java Libraries
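To make the reply above concrete: a standard way to apply LSH to sets is MinHash with banding, where users whose signatures agree on any band become candidate pairs. The following is a small illustrative Python sketch under those assumptions (toy parameters, not a tuned implementation, and not necessarily the exact scheme blackcompe had in mind); a deterministic hash replaces Python's built-in `hash()`, which is salted per process.

```python
import hashlib
import random

# A minimal MinHash + banding LSH sketch (illustrative toy parameters).
NUM_HASHES = 32
BANDS = 16                      # 16 bands of 2 rows: a whole band must match
ROWS = NUM_HASHES // BANDS
P = (1 << 61) - 1               # a large Mersenne prime for the hash family

random.seed(7)
COEFFS = [(random.randrange(1, P), random.randrange(0, P)) for _ in range(NUM_HASHES)]

def stable_hash(tag: str) -> int:
    """Deterministic 64-bit hash of a tag."""
    return int.from_bytes(hashlib.blake2b(tag.encode(), digest_size=8).digest(), "big")

def minhash_signature(tags):
    """For each hash function, keep the minimum over the set.
    Two sets agree on one position with probability equal to their Jaccard similarity."""
    return [min((a * stable_hash(t) + b) % P for t in tags) for a, b in COEFFS]

def candidate_pairs(users):
    """Bucket each band of each signature; users sharing a bucket become candidates."""
    buckets = {}
    for name, tags in users.items():
        sig = minhash_signature(tags)
        for band in range(BANDS):
            key = (band, tuple(sig[band * ROWS:(band + 1) * ROWS]))
            buckets.setdefault(key, set()).add(name)
    return {frozenset(group) for group in buckets.values() if len(group) > 1}

users = {
    "A": {"cat", "bull", "cow", "chicken", "duck"},
    "B": {"cat", "chicken", "cloth"},
    "C": {"lenovo", "hp", "sony"},
}
pairs = candidate_pairs(users)
```

With 6 million users this takes one pass to build the buckets, after which the exact similarity needs to be computed only within each bucket. Note that users with disjoint tag sets (like A and C) essentially never collide, since a shared minimum hash requires a shared element.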
This post has been edited by blackcompe: 02 December 2012, 08:28 PM
#3
Re: how to calculate the similarity of two users in facebook?
Posted 03 December 2012, 02:40 AM
Hi blackcompe, thank you very much for your good suggestions.
Locality-sensitive hashing is a new concept for me, so I will read up on it carefully. The idea is basically there; the only thing left is to adapt the data structures so it is feasible at this scale, since my main concerns are time complexity and space complexity. The goal is to find the similarity between all users, but in real Facebook use each person knows only a very small portion of the network, which means only a very small group has any similarity with a given person. If I directly calculate the similarity between every pair of users, I go through a lot of unnecessary calculations and waste a lot of time. So I thought of first filtering out all the unrelated users to find that so-called related group, and only then calculating the similarity between a particular person and his related group. That should be much faster than computing everything directly. With a normal data structure:
Step 1: use an array to store the 6 million (6M) users, with a linked list chaining all the users related to a particular user.
Problem: the search time is huge. Time complexity = O(6M × 6M × K), where K is the number of distinct hashtags a particular user has, because:
1. for a particular user, we search all other users (6M);
2. there are 6 million users in total (6M);
3. each user may have a large number of distinct hashtags (K).
From the analysis above, a normal data structure is infeasible. I will go and read about the locality-sensitive hashing mentioned by blackcompe. Thank you very much again, bro.
blackcompe, on 02 December 2012, 08:25 PM, said:
Locality-sensitive hashing is often used for similarity-matching and nearest-neighbor problems, comparing N objects in roughly O(N) time rather than O(N²). The idea is that if two pieces of data are similar, their hashes will be similar. Perhaps you could use the 100 or 1000 most popular hashtags as your high-dimensional data, and then represent each person as a binary array, where a 1 at some index indicates they have listed the tag associated with that index, and 0 otherwise. Unfortunately, LSH isn't a widely studied topic, so there's not a ton of easy-to-digest information on it. Here are a few sources that may help to get you started.
