Atlanta Real Estate Sign Installations Done Right!

I’m excited about my latest venture: we provide what we believe is by far the best sign & post installation service in the Metro Atlanta area. Our mobile app makes it super easy for real estate agents to order sign installs. Plus, the app lets us notify agents instantly (with photos) upon service completion. You can also see and manage your inventory of signs, toppers & riders within the app.

If you’d like to try our service, just download the app and sign up. As a bonus, here’s a discount / referral code that will give you $25 (50%) off your first installation order. We’re confident that once you try our service you won’t want to go back to the old way of doing things. There’s much more detail about the service on our website, including a gallery of our installations and our service area map.

Slow Responses from the BrainTree Ruby Gem? Try This Fix.

A few weeks ago I was tasked with mitigating some timeout issues in a client’s Rails app that makes BrainTree calls. The problem was getting worse as the client’s users built up more and more history in BrainTree. Apparently you can’t paginate the results or ask BrainTree, via the API, to exclude certain parts of the response. So you can end up with two years’ worth of transaction history that you don’t even care about attached to the piece of data that you do care about. As you’ll see, parsing that potentially big ball of XML can become a problem.

I started by outputting timestamps of the interactions with BrainTree to see if the slowness was on our side or theirs. For many calls it was slow on both ends. As an example, it might take 20 seconds for BrainTree to respond with the XML for the request and then another 28 seconds(!) for the BrainTree gem to parse that response. My client’s server was set to issue a timeout after 45 seconds, so you can see how this was a problem (besides the fact that we wouldn’t want the users to have to wait so long for a response).
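For what it’s worth, a tiny Benchmark wrapper is all the instrumentation this takes. Here’s a sketch of the approach; the `timed` helper and the method names in the usage comment are mine for illustration, not the client’s actual code:

```ruby
require 'benchmark'

# Wrap each phase separately so you can see where the seconds go.
def timed(label)
  result = nil
  elapsed = Benchmark.realtime { result = yield }
  puts format('%s took %.2f seconds', label, elapsed)
  result
end

# Usage would look something like:
#   xml    = timed('BrainTree response') { make_braintree_call }
#   parsed = timed('XML parsing')        { parse_response(xml) }
```

The helper returns the block’s value, so it can be dropped around existing calls without restructuring the code.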

As I dug a little deeper I discovered that the gem *should* use the speedy LibXML gem instead of the default REXML for XML parsing. Unfortunately, we hadn’t installed LibXML. So I installed and configured LibXML, but I still got the same results. Then I dug into the BrainTree gem’s code and discovered a bug that was preventing it from finding LibXML.

The problem was a simple typo — the gem was looking for “::LibXml::XML” but it should have been checking for “::LibXML::XML”. See the difference? The ‘M’ and the ‘L’ need to be capitalized.

So I changed the gem’s code and ran my test again. This time it still took the same amount of time for BrainTree to send us the XML response, but the parsing took only 2 seconds instead of 28.

I’ve submitted a pull request to BrainTree for this fix. You can see my commit here.

Calculating Standard Deviations in Ruby on Rails (and PostgreSQL)

I need to calculate some Bollinger Bands (BBs) for SwingTradeBot, which is built in Rails 4. Here’s a quick definition of Bollinger Bands:

Bollinger Bands® are volatility bands placed above and below a moving average. Volatility is based on the standard deviation, which changes as volatility increases and decreases.

So I needed to do some standard deviation calculations. I found a few Ruby statistics gems but quickly ran into issues with them. Their general approach is to monkey patch Array and/or Enumerable, which can cause other problems. I was getting conflicts with ActiveRecord b/c the monkey patches redefine “sum”, and there was another conflict with a different gem that I tried. There are supposedly fixes for this stuff but it just felt dirty.

Then, as I often do, I wondered if I could just get the database to do the calculation for me. If so, it would be faster that way and I wouldn’t have to go monkey patching Ruby and/or clutter my app with my own standard deviation code. It was a pretty simple thing to have PostgreSQL do the calc for me. I just needed Rails to produce a query like this:

SELECT stddev_pop(close_price) FROM prices
WHERE (stock_id = 3313 and day_number > 195 and day_number <= 215)

Seems simple enough. So here's the Rails code to do just that:

result ='stddev_pop(close_price)')
              .where('stock_id = ? AND day_number > ? AND day_number <= ?',
                     stock_id, day_number - 20, day_number)
              .load
# Note: I couldn't call ".first" on the relation above b/c that adds an ORDER BY
# clause that PostgreSQL complains about, since the ordered column isn't in the
# GROUP BY clause. (Price is the ActiveRecord model behind the prices table.)
standard_deviation = result.first.stddev_pop
self.upper_bb = twenty_day_moving_average + (standard_deviation * 2)
self.lower_bb = twenty_day_moving_average - (standard_deviation * 2)
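As a sanity check on the database’s answer (or if you’d rather avoid both the gems and SQL), population standard deviation is only a few lines of plain Ruby, with no monkey patching. A quick sketch matching stddev_pop’s definition:

```ruby
# Population standard deviation, matching PostgreSQL's stddev_pop:
# the square root of the mean of squared deviations from the mean.
def stddev_pop(values)
  mean = values.inject(0.0) { |sum, v| sum + v } / values.size
  variance = values.inject(0.0) { |sum, v| sum + (v - mean)**2 } / values.size
  Math.sqrt(variance)
end

stddev_pop([2, 4, 4, 4, 5, 5, 7, 9])  # => 2.0
```

Handy for a unit test around the Bollinger Band math, even if the real calculation stays in PostgreSQL.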


Ruby / Rails Memoization Gems: Memoist vs. Memoizable

I was just adding some memoization to a Rails app and I was exploring the available gems. I’d used Memoist in the past on another project but I couldn’t remember why I chose it over other gems.

While researching today I found the Memoizable gem and thought that it looked pretty good. It has all these nice badges on the GitHub page, like a CodeClimate score of 4.0. So I figured I’d go with Memoizable.

After installing it and memoizing some methods I realized why I went with Memoist in the past. Memoizable won’t let you memoize methods that take parameters. If you try to do so it will complain loudly with “Cannot memoize Class#method_name, its arity is 1”.

That was a non-starter for me. I switched to Memoist and all is well.

I’ve Finally Found a Rails 4.x Blogging Engine / Gem

I can’t believe how difficult it’s been to find a good solution for plugging a simple blog into an existing Rails app. I wanted to add a blog to SwingTradeBot, the new site I’m building, but most answers to this question that I’ve found say to either use RefineryCMS or “roll your own”. Well, I tried Refinery and quickly ran into gem conflicts galore. As for rolling my own… I don’t have time for that; I’d rather use something that’s been thought through and is well suited to the task.

I was ready to give up and just roll my own when I found the Monologue gem. That looked really promising but then I ran into a Rails 4 compatibility issue. However, reading through the discussion thread on that issue I discovered that somebody had created the Blogo gem (plugin / engine).

It’s still early days with this gem but so far, so good for the most part. Installation and set-up went smoothly (in development mode). Here are some things I ran into after pushing to production (on Heroku):

  1. There’s a generator to create the admin user (rake blogo:create_user[user_name,,password]), but that didn’t work in production. Only after creating the user manually in a Rails console did I find out that I needed to prepend “RAILS_ENV=production” to the rake command.
  2. The assets were missing. Running “RAILS_ENV=production rake assets:precompile” fixed that.
  3. Note that for comments to appear you need to be signed up for Disqus and you need to enter your site’s short name into the Blogo config.
  4. There are some configuration options that I had to discover by digging through the code. See below for an example of what I’ve added to my config/application.rb.

Here’s what’s in my config/application.rb:

Blogo.config.site_title = "SwingTradeBot Blog"
Blogo.config.site_subtitle = "Some clever subtitle..."
Blogo.config.keywords = 'stock trading, technical analysis, stock scanning'
Blogo.config.disqus_shortname = 'swingtradebot'
Blogo.config.twitter_username = 'swingtradebot'

Follow Your Favorite NFL Team on Your iPad in Flipboard

With N4MD’s new NFL coverage it’s simple to stay up-to-date on your favorite pro football team on your iPad. Simply add your team to your Flipboard favorites and you’ll be informed of all the important team news all season long.

Here’s how to add your team to Flipboard:

  1. Launch Flipboard and tap the “+ More…” box, or tap “More…” in the red ribbon in the upper right corner.
  2. That will open the “Add Content” page. This is where you can search for your team’s magazine. Type the appropriate search term for your team:
    – Type FanMag_Cards for the Arizona Cardinals.
    – Type FanMag_Bucs for the Tampa Bay Buccaneers.
    For all other teams type FanMag_YourTeamName. For example, FanMag_49ers, FanMag_Steelers, FanMag_Cowboys, etc.

    Then just tap the magazine which will appear in the search results (see the red arrow in the image below).

  3. The final step is to add that magazine to your Flipboard favorites. Do that by tapping the “Add” button at the top of the screen.

Lack of Indexes on Ultimate Tag Warrior Tables

Over the last week or so I’ve been on a mission to improve the performance of my web server, and especially MySQL. I took Arne’s advice and turned on the query cache. That helped but I still needed to do more. After doing some research I discovered MySQL’s slow query log, which does exactly what it sounds like. I enabled slow query logging and set “long_query_time” to 5 seconds. Shortly after I restarted MySQL the slow query count started to rise.

Every query in the slow query log was sent from the Ultimate Tag Warrior WordPress plugin which I use on my other blog. Here are some of the queries:

SELECT count( p2t.post_id ) cnt
FROM wp_tags t
INNER JOIN wp_post2tag p2t ON t.tag_id = p2t.tag_id
INNER JOIN wp_posts p ON p2t.post_id = p.ID
WHERE post_date_gmt < '2007-03-08 21:49:06' AND ( post_type = 'post' ) GROUP BY t.tag ORDER BY cnt DESC LIMIT 1 ;


SELECT tag, t.tag_id, count( p2t.post_id ) AS count,
( ( count( p2t.post_id ) / 3661 ) * 100 ) AS weight,
( ( count( p2t.post_id ) / 1825 ) * 100 ) AS relativeweight
FROM wp_tags t
INNER JOIN wp_post2tag p2t ON t.tag_id = p2t.tag_id
INNER JOIN wp_posts p ON p2t.post_id = p.ID
WHERE post_date_gmt < '2007-03-09 02:27:39' AND ( post_type = 'post' ) GROUP BY t.tag ORDER BY weight DESC LIMIT 50 ;

That led me to take a look at what was going on with the wp_tags and wp_post2tag tables. I ran EXPLAIN on the queries and saw that they were doing table scans instead of using indexes. So I went to look at the table definitions and was surprised at what I saw. The only index on the wp_post2tag table was rel_id, the auto-incremented primary key. So the columns that were actually used for joins, tag_id and post_id, had no indexes. My SQL is very rusty but I knew that wasn’t a good thing. I also took a look at the wp_tags table and saw that it only had an index on the tag_id column. I’ve seen some queries with “tag = ‘tag_name’” in the WHERE clause, so I figured it would be good to have an index on the tag column as well.

After consulting with my brother, whose SQL skills are much more up to date than my own, I decided to add indexes to those tables. I created an index called ‘tags_tag_idx’ on the wp_tags.tag column. On the wp_post2tag table I created two indexes: the post2tag_tag_post_idx index is on tag_id then post_id, and the post2tag_post_tag_idx index is on post_id then tag_id. I’m not sure whether using concatenated indexes is better than creating separate single-column indexes for each column, but I think it’s the way to go after discussing it with my brother and looking at how the wp_post2cat and wp_linktocat tables are indexed. They both have concatenated indexes.
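In case it’s useful, here are the CREATE INDEX statements for the indexes described above (MySQL syntax, names as given):

```sql
-- Index the tag name for "tag = 'tag_name'" lookups
CREATE INDEX tags_tag_idx ON wp_tags (tag);

-- Concatenated indexes covering the join from either direction
CREATE INDEX post2tag_tag_post_idx ON wp_post2tag (tag_id, post_id);
CREATE INDEX post2tag_post_tag_idx ON wp_post2tag (post_id, tag_id);
```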

I ran some queries on the tables before and after to see if things were sped up, and indeed they were. Unfortunately, when I ran EXPLAIN on the queries from the slow query log I saw mixed results. The keys I added now showed up as “possible_keys” and as the actual keys, but the queries still ended up doing table scans. For the tags table the EXPLAIN showed the dreaded “Using temporary; Using filesort”.

So while I didn’t completely solve my slow query problem, the new indexes do help many of the simpler queries that access wp_post2tag and wp_tags. If you’re using Ultimate Tag Warrior and are concerned about your database load, you may want to add some indexes to the tag tables.

Technorati Beta

Check out the revamped (and well designed) Technorati. Here’s what’s new in the beta release:

  • We’ve improved the user experience, making Technorati accessible to more people and, specifically, people who are new to blogging. We’ve tried
    to make it very simple to understand what Technorati is all about, and make it easy to understand how we’re different from other search engines.
  • We’ve learned from the incredible success of tags, and brought some of those same features into search, as well as expanding tag functionality. Now, if your search matches a tag, we bring in photos and links from flickr, furl, delicious, and now buzznet as well.
  • We now have more powerful advanced search features. You can now click the “Options” link beside any search box for power searching options.
  • We’ve added more personalization. Sign in, and you’ll see your current set of watchlists, claimed blogs, and profile info, right on the homepage, giving you quick access to the stuff you want as quickly as possible.
  • New Watchlist capabilities have been added. For example, you no longer need an RSS reader to watch your favorite searches. Now you can view all of your favorite searches on one page. Of course, you can still get your watchlists via RSS, and it is even easier to create new watchlists. You can also get RSS feeds for tagged posts — just check the bottom of each page of tag results!

Tabbed Browsing in IE 6

Too little, too late? A ploy to get you to install the MSN search toolbar?

Weeks after promising tabs in its upcoming IE 7 release, Microsoft made the long-awaited browsing feature available for IE 6 through its MSN toolbar.

With the version of MSN Search Toolbar made available Wednesday, IE 6 gains the ability to open numerous Web pages within a single window, each selectable by a small tab at the top of the window.

Interview with a Link / Comment Spammer

The Register interviewed a link spammer who revealed some of his methods and motivation. The bottom line: spammers can make up to seven-figure incomes from some simple computer code. Some key points:

For even a semi-competent programmer, writing programs that will link-spam vulnerable websites and blogs is pretty easy. All you need is a list of blogs to hit, and again, even a semi-competent programmer can pull together a huge list by searching for sites with keywords such as “WordPress”, “Movable Type” and “Blogger”.

And people like Sam are much more than competent. “You could be aiming at 20,000 or 100,000 blogs. Any sensible spammer will be looking to spam not for quality [of site] but quantity of links.” When a new blog format appears, it can take less than ten minutes to work out how to comment-spam it. Write a couple of hundred lines of terminal script, and the spam can begin. But you can’t just set your PC to start doing that. It’ll get spotted by your ISP and shut down, or the IP address of your machine will be blocked forever by the targeted blogs.

So Sam, like other link spammers, uses the thousands of ‘open proxies’ on the net. These are machines which, by accident (read: clueless sysadmins) or design (read: clueless managers) are set up so that anyone, anywhere, can access another website through them. Usually intended for internal use, so a company only needs one machine facing the net, they’re actually hard to lock down completely.

Sam also described spammers setting up their own blogs and referencing posts on zillions of other blogs, which then incestuously point back to the spammer, whose profile is thus raised. So what does put a link spammer off? It’s those trusty friends, captchas: tests humans are meant to be able to do but computers can’t, like reading distorted images of letters. “Even user authentication can be automated.” (Unix’s curl command is so wonderfully flexible.)

“The hardest form to spam is that which requires manual authentication such as captchas. Or those where you have to reply to an email, click on a link in it; though that can be automated too. Those where you have to register and click on links, they’re hard as well. And if you change the folder names where things usually reside, that’s a challenge, because you just gather lists of installations’ folder names.”