Friday, January 9, 2015

Association Rule Mining (ARM) using SAS In-Memory Statistics for Hadoop: A Start-up Example

In SAS Enterprise Miner, there are a Market Basket node and an Association node. In SAS In-Memory Statistics for Hadoop ("SASIMSH"), the ARM (Association Rule Mining) statement covers most, if not all, of what the two nodes do inside Enterprise Miner. This post presents a start-up example of how to conduct ARM in SASIMSH. While it does not change much of what the Market Basket and Association nodes essentially do, you will see how fast SASIMSH can get the job done over 300 million rows of transactions spanning 12 months.

I focus on association here; once you introduce the temporal order of the transactions, you can easily extend the same analysis to sequences.

The SASIMSH system used for this post is the same as the one used for my post dated 12/14/2014, "SAS High Performance Analytics and In-Memory Statistics for Hadoop: Two Genuine in-Memory Math Blades Working Together". Here is some information about the data set used.

The data set is a simulated transaction data set consisting of 12 monthly transaction files, 25 million transaction entries each, totaling 300 million rows. The total size of the data set is ~125 GB. Below is the monthly distribution.

[Figure: monthly distribution of the 300 million transaction entries]
T_weekday records which day of the week a transaction happens on (Sunday, Monday, ..., Saturday), and T_week records which week of the year it falls in (week 1 through week 52). These segment variables are created in case you want to break down your analysis; a sketch of how they might be derived follows.
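For illustration only, segment variables like these could be computed with ordinary SAS date functions before the table is loaded. The data set and date variable names below (work.trans, trans_date) are hypothetical, not taken from the original code.

/* Hedged sketch: deriving the segment variables from a transaction date.
   Data set and variable names (work.trans, trans_date) are hypothetical. */
data work.trans_seg;
    set work.trans;
    t_weekday = weekday(trans_date);   /* 1=Sunday ... 7=Saturday         */
    t_week    = week(trans_date);      /* week of the year                */
    month     = month(trans_date);     /* 1-12, for the monthly breakdown */
run;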

Below is the main body of the ARM modeling code.

[Figure: main body of the ARM modeling code]
1. The two PROC LASR sections create the LASR in-memory analytics process and load the analytics data set into it. The creation took ~10 seconds and the load took ~15 seconds (see picture); a sketch of what these sections look like follows.

[Figure: SAS log showing LASR server creation (~10 seconds) and table load (~15 seconds)]
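For reference, the two PROC LASR sections look roughly like the sketch below. The port number, paths, and librefs are hypothetical placeholders, not the exact code from the screenshot.

/* Hedged sketch of the two PROC LASR sections; port, paths, and
   librefs are hypothetical placeholders. */
proc lasr create port=10010 path="/tmp";     /* start the in-memory server */
    performance nodes=all;
run;

libname hdat sashdat path="/user/demo";      /* SASHDAT tables in HDFS;
                                                other connection options
                                                omitted for brevity       */

proc lasr add data=hdat.trans port=10010;    /* lift the ~125 GB table
                                                into server memory        */
run;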
2. The FREQUENCY statement simply profiles the variables whose distributions I reported above; a sketch follows.
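The profiling step might look like the sketch below; the SASIOLA libref, connection options, and table name are hypothetical.

/* Hedged sketch of the profiling step; libref, tag, and table name
   are hypothetical. */
libname lasr sasiola port=10010 tag=hdat;    /* point at the LASR server;
                                                host option omitted       */
proc imstat;
    table lasr.trans;
    frequency month t_weekday t_week;        /* one-way frequency tables  */
run;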
3. The ARM statement is where the core activity happens; a hedged sketch of the full statement follows this list.

  •  ITEM= is where you specify the product-category variable. You have full control over which level of the product hierarchy to use.
  •  TRAN= is where you specify the granular level of the transaction data; there are ~9 million unique accounts in this exercise. If you instead choose a level that has, say, 260 unique values (with properly corresponding product levels), you can easily turn the ARM facility into a BI reporting tool, closer to what IMSTAT's GROUPBY statement does.
  •  You can use MAXITEMS= (and/or MINITEMS=) to customize the item-set sizes the compilation considers.
  •  FREQ= is simply the order count of the item. While FREQ= carries a more 'physical, accounting-book' weight (and is therefore less analytical, by definition), WEIGHT= is more analytical and intriguing. I used list price here, essentially compiling support in terms of individual price importance, assuming away any differential price-item elasticity and a lot more. You could easily build a separate model to study this weight input alone, which is beyond the scope of this post.
  •  The two aggregation options let you decide how item aggregation and ID aggregation should happen. If WEIGHT= is left blank, both aggregations ignore the aggregation values you plug in and fall back to the default of SUM, which simply adds things up. Ideally, each aggregation would use its own weight variable; for now, if you specify WEIGHT=, the same weight variable is used for both. If you are really that 'weight' sensitive, you can run the aggregations one at a time, which does not take much more time or resources.
  •  The ITEMSTBL option asks for a temporary table to be created in memory amid the flow, for further actions during the in-memory session; this is the table the system-reserved keyword &_tempARMItems_ refers to in the next step. This is different from what the SAVE option generates: SAVE typically writes a table out to a Hadoop directory "when you are done."
  •  The list of options commented out in green shows that you can customize the support output; when generating rules or association scores, you do not have to follow the same configuration used when the ARM model was fit above.
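Putting these options together, the core ARM step looks roughly like the sketch below. The variable names (prodcat, acct, listprice) are hypothetical, and the aggregation option spellings follow the ones named in this post.

/* Hedged sketch of the ARM step; variable names are hypothetical. */
proc imstat;
    table lasr.trans;
    arm item=prodcat          /* product-category variable                 */
        tran=acct             /* transaction ID: ~9 million unique accounts */
        weight=listprice      /* list price as the analytical weight       */
        minitems=2 maxitems=3 /* restrict item-set sizes                   */
        itemagg=sum agg=sum   /* item- and ID-level aggregation (defaults) */
        itemstbl              /* keep item sets in a temporary in-memory
                                 table, referenced by &_tempARMItems_      */
        /* further options (shown commented out in green in the original
           screenshot) customize the support output for rule generation   */
        ;
    run;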
4. Below is what some of the output looks like:

[Figure: sample ARM output table]
  •  The _T_ table is the temporary table created. You can use the PROMOTE statement to make it permanent; a sketch follows this list.
  •  _SetSize_ simply tells the number of products in the combination.
  •  _Score_ is the result of your (double) aggregations. Since you can select one of four aggregation options (SUM, MEAN, MIN, MAX) for either aggregation (ITEMAGG and AGG), you need to interpret the score according to the options you chose.
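Promoting the temporary table might look like the minimal sketch below, assuming PROMOTE takes the new table name; per the post, &_tempARMItems_ is the system-reserved reference to the temporary result, while the permanent name armitems is hypothetical.

/* Hedged sketch; the promoted table name armitems is hypothetical. */
proc imstat;
    table lasr.&_tempARMItems_;   /* temporary item-set table from ITEMSTBL */
    promote armitems;             /* make it a permanent in-memory table    */
    run;
quit;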

5. This whole flow, while sounding cliché content-wise, takes only ~8 minutes to finish over 300 million rows.

[Figure: SAS log showing the ~8-minute run, with CPU time and real time]
The gap between CPU time and real time is pretty large, but I care less since the overall run is only ~8 minutes.
