Sunday, February 17, 2013

OAuth Complications

Recently, I've been tackling an interesting problem with OAuth 2. We want users to be able to register via Google OAuth on a mobile app while we simultaneously store their records (email address, etc.) on the server. We also need offline access via a refresh token. If the user registers via the mobile app, we need to grab this refresh token and send it to the server, which should return a session cookie. From there, the mobile app can proceed as it normally does and push the requisite data to the server.
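Here's a minimal sketch of the server half of that flow, assuming a Flask endpoint (the route, field names, and verification step are illustrative assumptions, not our actual implementation):

  # The mobile app POSTs the token it obtained from Google; the server
  # verifies it, records the user, and answers with a session cookie.
  import json
  import urllib2

  from flask import Flask, request, session

  app = Flask(__name__)
  app.secret_key = "change-me"  # signs the session cookie

  # Google's tokeninfo endpoint reports who a token belongs to.
  TOKENINFO_URL = "https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=%s"

  @app.route("/register", methods=["POST"])
  def register():
      token = request.form["token"]
      info = json.loads(urllib2.urlopen(TOKENINFO_URL % token).read())
      # ... store info["email"] plus the offline-access token in the user table ...
      session["email"] = info["email"]  # Flask attaches the cookie to the response
      return "ok"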

Update:
Seems like Google Play Services only lets you get an access token (not a refresh token), which is a shame, because the Account Chooser is a pretty slick feature (no need for the user to type in a username or password). I added a quick AsyncTask to check what the token looks like, spitting it out onto LogCat.


  // Fetches the token off the main thread and logs it to LogCat.
  // Requires: java.io.IOException, android.os.AsyncTask, android.util.Log,
  // and com.google.android.gms.auth.GoogleAuthException.
  private class FetchToken extends AsyncTask<Void, Void, Void> {

    @Override
    protected Void doInBackground(Void... params) {
      try {
        // credential is the (presumably GoogleAccountCredential) field
        // held by the enclosing activity
        String token = credential.getToken();
        Log.d("calendar", token);
      } catch (IOException exception) {
        // Transient network error while fetching the token
        exception.printStackTrace();
      } catch (GoogleAuthException exception) {
        // Authorization problem (e.g. revoked or invalid account)
        exception.printStackTrace();
      }
      return null;
    }
  }

It returns:  ya29.AHE.....  (an access token).
The challenge is to figure out how to get a refresh token to the server, as we cannot assume that the user will onboard through the web app first. For context, the standard web-server flow that does yield a refresh token is sketched below.
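This is the ordinary OAuth 2.0 authorization-code exchange against Google's token endpoint; starting the flow with access_type=offline is what makes Google include a refresh token. The open question still stands: how the mobile app obtains an authorization code for the server in the first place.

  # Swap a one-time authorization code for tokens. If the auth flow was
  # started with access_type=offline, the response includes a refresh_token
  # the server can store for offline access.
  import json
  import urllib
  import urllib2

  TOKEN_URL = "https://accounts.google.com/o/oauth2/token"

  def exchange_code_for_tokens(auth_code, client_id, client_secret, redirect_uri):
      body = urllib.urlencode({
          "code": auth_code,
          "client_id": client_id,
          "client_secret": client_secret,
          "redirect_uri": redirect_uri,
          "grant_type": "authorization_code",
      })
      response = urllib2.urlopen(TOKEN_URL, body)  # POST, since a body is given
      tokens = json.loads(response.read())
      # tokens: {"access_token": ..., "expires_in": ..., "refresh_token": ...}
      return tokens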

Sunday, February 10, 2013

Something New

Shipping something new, soon.

Memeja YCombinator Interview Experience


I wanted to collect my thoughts before posting on Memeja again.

Here's a brief timeline of our progress:

  • Spring 2012:  Won $16,500 at Dartmouth's Entrepreneurship Competition: http://thedartmouth.com/2012/04/06/news/des
  • Summer 2012:  Hardcore development began. Iterated on different prototypes and interviewed students for market feedback.
  • Fall 2012: Moved to San Francisco with another co-founder.
    • Demoed Memeja with UC Berkeley students and iterated based on their needs.
    • 8–11 hours of coding every day (according to the RescueTime app).

In November 2012, Memeja interviewed for the YCombinator Winter 2013 batch.
Here's our application video: http://www.youtube.com/watch?v=hdpegsikzhI. (The video is just one small piece of the written application.)


We read somewhere that 10% of applicants get an interview, so needless to say, we were excited. We started to dream, which I think anybody can relate to: at that moment, our derivative seemed so positive.


YCombinator interviews are quite short: 10 minutes in total.



-
What did we do to prepare?

We spent the majority of the two weeks before the interview talking to YCombinator alums for advice. We also spent lots of time coding new features based on our analysis of the market feedback. We drilled the common questions, ad nauseam, until we could answer each of the following in 15 seconds or less:

  • How are you going to make money?
  • Who needs this application? How do you know they need it?
  • What is Memeja? etc. 

It turns out, none of this helped a great deal. In retrospect, we should have spent the majority of that time actually spreading the word about the product to gain real traction (we had fewer than 100 users from UC Berkeley at that point).

For the three days before the actual interview, we went to the YCombinator HQ in Mountain View every day. I would recommend any prospective interviewee do the same! It's absolutely amazing to see what other people are working on. We saw everything from 3D-printing vending machines to a Yelp for people.

We eventually learned that we would be interviewing with Paul Graham. One of my tech heroes! It was unbelievable. But we had also heard from other YC alums that PG was incredibly skeptical of Internet memes. Looking on the bright side, we figured that if we could convince PG, we could convince anybody.

When we were called in for the actual interview, Max and I breathed each other in, a trick I learned in an acting class at Dartmouth: we make eye contact and breathe synchronously.

I remember walking into the interview room and shaking hands with Paul Graham and the other YCombinator partners.

-

The Actual Interview

Everything went by so quickly that it's hard to remember precisely what happened. Some questions I do remember:

  • Why are you based in SF instead of at Dartmouth? (We wanted to be in a startup hub.)
  • Why hasn't this been done already?


I do remember Paul Graham scrolling through our live feed for at least a minute, saying nothing. That was the most intimidating portion of the interview.

Because Memeja is a social network built around memes, most of the rage comics were not under our control; they were inside jokes between UC Berkeley students.

All I remember Paul Graham saying at the end, after scrolling through them, was that they were "incomprehensible." Funny in retrospect, but quite nerve-wracking in the moment. As a consolation, PG said that he could see people using rage comics to send each other stories. He also commented that Dartmouth was "very hip" for awarding $16,500 to a meme startup.

-

Exiting the interview, Max and I agreed that it wasn't as intense as we thought it would be. Therein lay the worry, though!

I could be wrong (as I only have one data point) but I suspect that the intensity of the interview is a proxy for their interest. 


-

Overall, it was an interesting experience, and an interview far different from any other I've been through.

Saturday, February 2, 2013

Reverse Engineering the Dartmouth Nutrition Menu Pt 2

From the first part, I had scraped an individual item's nutrition values and the daily meals. Now the priority is to loop through all the elements in the daily meals and feed each one into a function that spits out its nutrition values. At the end of the loop, I should have all the nutrition information for the day.

Looking at the JSON, it seems each item is defined by a series of ids. The bigger picture is that with the food's id, I can extract its nutritional value with another request to the server. To get the id, I need to be able to parse the bigger JSON.

After examining the JSON, it seems that the items in the elements are referenced by two ids, mm_id and mm_rank. A rough sketch of the daily loop is below.
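This sketch assumes the jsonrpc_call helper from Pt 1; the method names and response layout are guesses standing in for what FireBug actually shows (only the mm_id and mm_rank fields are confirmed).

  # Loop over the day's menu and fetch the nutrient label for each item.
  # Method names and JSON layout are placeholders; mm_id is the confirmed key.
  daily_nutrition = {}

  menu = jsonrpc_call("get_menu_items", {"date": "2013-02-02"})
  for item in menu:
      mm_id = item["mm_id"]  # identifies the food for the follow-up request
      label = jsonrpc_call("get_nutrient_label", {"mm_id": mm_id})
      daily_nutrition[mm_id] = label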

Friday, February 1, 2013

Reverse Engineering the Dartmouth Nutrition Menu Pt 1

This is an interesting project. I have been struggling with weight gain for the gym: I know that if I don't eat enough food, I won't be able to lift as much or see results. Thinking that we naturally optimize what we measure, I thought it would be cool to create a food diary that scrapes information from the Dartmouth Nutrition Menu.

I would be able to document what I'm eating and see the associated macronutrients at the end of every meal. I could set my own calorie goals through certain foods and optimize that process. This would personally translate to seeing progress at the gym.

Here is the project: https://github.com/deloschang/foco-nutrition-scraper

--

Understanding how the nutrition menu works is the first priority. On the surface, it would appear that the macronutrients listed on the menu are images, and viewing the source only shows convoluted JavaScript functions. But when you click an item on the nutrition menu, the page sends a JSON-RPC request to the server, which returns the relevant macronutrient information: everything from calories to vitamin intake.

To get the nutrition values, I copied the JSON-RPC request from FireBug and replayed it with Python's urllib. This returned a full JSON payload of the nutrients.

To grab the list of the day's meals, I do the same with the different method and parameters observed in FireBug. Now I have a list of the food items served that day and a way to grab the nutrients for a specific item. A sketch of what these requests look like is below.
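As a sketch (the endpoint URL and method names here are placeholders for the ones captured in FireBug):

  import json
  import urllib2

  # Placeholder endpoint -- the real URL comes from the FireBug capture.
  RPC_URL = "http://nutrition.dartmouth.edu/jsonrpc.php"

  def jsonrpc_call(method, params):
      # POST a JSON-RPC request mirroring the one captured in FireBug
      # and return the decoded result.
      payload = json.dumps({
          "jsonrpc": "2.0",
          "method": method,
          "params": params,
          "id": 1,
      })
      request = urllib2.Request(RPC_URL, payload,
                                {"Content-Type": "application/json"})
      return json.loads(urllib2.urlopen(request).read())["result"]

  # One call grabs the day's meals, another a single item's nutrients
  # (method names and the id are guesses standing in for the captured ones):
  meals = jsonrpc_call("get_meals_for_day", {"date": "2013-02-01"})
  label = jsonrpc_call("get_nutrient_label", {"mm_id": 12345})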

Next, I need to figure out how to loop over all the items and build a comprehensive list. With that, I can iterate over the list, scrape all the information I need, and store it in a separate database.

Then an Android app can poll my server for the data and present it to the user with some UI: input boxes next to each food item where users enter their food intake. The application can use that information to provide complete macronutrient statistics for the day, so you can track your food consumption day by day and potentially see related graphs.

Once I create the app, I should hedge against hitting the Dartmouth server with too much traffic and instead mirror the data on my own server. Then, when students access the data, my server handles the load, while a background job on my end polls the Dartmouth server at small intervals by itself. A sketch of that background job is below.
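This is a minimal sketch of the mirroring job, reusing the jsonrpc_call helper from above; the table layout and scrape interval are assumptions.

  # Scrape on a schedule and serve from a local copy, so student traffic
  # never reaches the Dartmouth server directly.
  import json
  import sqlite3
  import time

  def mirror_daily_menu(db_path="nutrition.db", interval_secs=6 * 3600):
      conn = sqlite3.connect(db_path)
      conn.execute("CREATE TABLE IF NOT EXISTS nutrients "
                   "(mm_id INTEGER PRIMARY KEY, label_json TEXT)")
      while True:
          today = time.strftime("%Y-%m-%d")
          menu = jsonrpc_call("get_meals_for_day", {"date": today})
          for item in menu:
              label = jsonrpc_call("get_nutrient_label", {"mm_id": item["mm_id"]})
              conn.execute("INSERT OR REPLACE INTO nutrients VALUES (?, ?)",
                           (item["mm_id"], json.dumps(label)))
          conn.commit()
          time.sleep(interval_secs)  # re-scrape a few times a day, not per request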