Office of Secret Intelligence

Who taught you to be a spy, fucking Gallagher!?!

Introducing: Fluorescent

I've been working on a gem called fluorescent for a little while, and I just recently cleaned it up and released it.  This CMS is using it, so it seems to be working.

 

Basically, I wanted to be able to highlight search terms in search results and only show them in context for things like article bodies, which could potentially be several pages long.  Truncating them to a certain length is certainly beneficial.
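To make the idea concrete, here's a plain-Ruby sketch of the concept (this is just an illustration of what highlighting-in-context means, not fluorescent's actual API):

```ruby
# A standalone sketch of the idea behind fluorescent (NOT the gem's real
# API): wrap each occurrence of the query in a highlight tag, and truncate
# the body to a window of context around the first hit.
def highlight_in_context(body, query, window: 30)
  idx = body.downcase.index(query.downcase)
  # no hit: just return a truncated excerpt
  return body[0, window * 2] unless idx

  start   = [idx - window, 0].max
  excerpt = body[start, query.length + window * 2]
  # case-insensitive highlight of every occurrence in the excerpt
  excerpt.gsub(/#{Regexp.escape(query)}/i) { |m| "<mark>#{m}</mark>" }
end

body = "Recursive queries have been available in Postgres since 8.4."
highlight_in_context(body, "postgres", window: 12)
# returns a short excerpt with <mark>Postgres</mark> highlighted
```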

 

Here's a brief rundown:

 

Add fluorescent to your Gemfile:

gem 'fluorescent'

Next, you need to pass your search results from ActiveRecord to fluorescent.

I do it like so:

Then, in my model:

Some very simple controller code:

 

How you display the results in the view is up to you.  I simply make sure the query parameter exists, then make sure there are actual results, and then iterate over the array of hashes:

 

I realize there are several points here that need to be brought up to the Rails 4 code standard, but I think these snippets get the point across.  It would be wonderful to be able to call something like @results = fluorescent(@resultset) and have it just do the right thing, but that's a bit down the road in terms of development and complexity.

You'll also note that I keep track of the raw ActiveRecord resultset.  It isn't really used right now; originally it was there in case you needed to get at the non-hash-serialized ActiveRecord row objects.

 

All thoughts, suggestions, feedback, and critique are welcome.  Please enjoy!


RE: Playing With Scala: Building a Small Web App with Play 2.4, Play-Slick and Postgres

Thanks for the great write-up! I am setting up a web app very similar to yours and was going through this version/documentation mismatch hell. You saved me a lot of time!


Treeify 0.04 Released

Amidst all this being busy stuff, I have released version 0.04 of Treeify (link to gem: https://rubygems.org/gems/treeify)

 

Here's a list of changes (noted in the Changes.md file):

 

(My changelog generation needs some work but hey! It's there.)

 

I haven't yet updated this blog to use the newest Treeify code, but I'll do that soon, I hope.  The biggest changes involved adding the ability to specify which columns to use in a query and actually have them retrieved.  I created some benchmarks here: https://github.com/dhoss/treeify/wiki/Benchmarks, though I don't know how accurate they are beyond saying one is faster than the other.  Although the previous method was faster, it did a lot less, and the new find_by_sql call can be fine-tuned in the future.

On a side note, this article about generating Changelogs from git is awesome: http://brettterpstra.com/2014/08/03/shell-tricks-changelogs-with-git/, and so is this snippet: https://coderwall.com/p/5cv5lg/generate-your-changelogs-with-git-log

That's all for now, hopefully I'll get some more time tomorrow to write some more interesting articles.


RE: RE: Playing With Scala: Building a Small Web App with Play 2.4, Play-Slick and Postgres

Hey, thanks for the reply! Sorry for getting back to you so late, I don't have email notifications set up on here yet. I'm glad I could be of help. I'm hoping to finally get part three out this week some time as well, in case you're interested in following along.

Welp, that's a bug

I'm an idiot, and made the post body column a varchar(255).  That'll need fixing.


RE: Playing With Scala: Building a Small Web App with Play 2.4, Play-Slick and Postgres

Hi Devin,

I had a look at your build.sbt (here https://github.com/dhoss/steel/blob/master/build.sbt), and there are a couple of things you should change. First, remove the dependency on the jdbc module. You are using the play-slick module for all database access, so you should not need Play jdbc. In fact, as soon as you update to play-slick 1.0.0-RC2 (which was released last week, together with Play 2.4.0-RC3), you will get an exception similar to https://playframework.com/documentation/2.4.0-RC3/PlaySlickFAQ#A-binding-to-play.api.db.DBApi-was-already-configured. Second, update to play-slick 1.0.0-RC2, because RC1 had an annoying issue (https://github.com/playframework/play-slick/issues/245) that could prevent your app from starting.

Otherwise, it's a good article. I'll see how I can clarify why you need both

slick.dbs.default.driver="slick.driver.PostgresDriver$"
slick.dbs.default.db.driver="org.postgresql.Driver"

in your application.conf. But, in a nutshell, the first is the Slick driver (which you will be using in your code), while the second is the JDBC driver which is going to be used by Slick backend. See the Slick documentation for DatabaseConfig http://slick.typesafe.com/doc/3.0.0/database.html#databaseconfig

Thanks for trying it out, and taking the time to write about your experience.


RE: RE: Playing With Scala: Building a Small Web App with Play 2.4, Play-Slick and Postgres

I am having the same problem:

ProvisionException: Unable to provision, see the following errors:

1) No implementation for play.api.db.slick.DatabaseConfigProvider was bound.
  while locating play.api.db.slick.DatabaseConfigProvider
    for parameter 0 at repo.StudentRepo.<init>(StudentRepo.scala:17)
  while locating repo.StudentRepo
    for parameter 0 at controllers.LoginController.<init>(LoginController.scala:24)
  while locating controllers.LoginController
    for parameter 6 at router.Routes.<init>(Routes.scala:47)
  while locating router.Routes
  while locating play.api.inject.RoutesProvider
  while locating play.api.routing.Router
    for parameter 0 at play.api.http.JavaCompatibleHttpRequestHandler.<init>(HttpRequestHandler.scala:200)
  while locating play.api.http.JavaCompatibleHttpRequestHandler
  while locating play.api.http.HttpRequestHandler

my build.sbt

 

name := """demo"""

version := "1.0-SNAPSHOT"

lazy val root = (project in file(".")).enablePlugins(PlayScala)

scalaVersion := "2.11.7"

libraryDependencies ++= Seq(
cache,
ws,
specs2 % Test,
"org.webjars" %% "webjars-play" % "2.5.0-1",
"org.webjars" % "bootstrap" % "3.1.1-2",
"com.adrianhurt" %% "play-bootstrap" % "1.0-P25-B3",
"com.typesafe.play" %% "play-slick" % "1.1.1",
"com.h2database" % "h2" % "1.4.187" ,
"org.postgresql" % "postgresql" % "9.4-1206-jdbc4",
"com.adrianhurt" %% "play-bootstrap" % "1.0-P25-B3",
"ch.qos.logback" % "logback-classic" % "1.1.3",
"com.typesafe.play" %% "play-slick-evolutions" % "1.1.1",
"com.typesafe.slick" %% "slick-hikaricp" % "3.1.1",
"com.typesafe.slick" %% "slick" % "3.1.1",
"org.seleniumhq.selenium" % "selenium-server" % "2.52.0",
"org.seleniumhq.selenium" % "selenium-firefox-driver" % "2.52.0",
"org.scalatest" %% "scalatest" % "2.2.1" % "test",
"org.scalatestplus" %% "play" % "1.4.0-M3" % "test",
"org.seleniumhq.selenium" % "selenium-htmlunit-driver" % "2.52.0"

)
javaOptions in Test += "-Dconfig.file=conf/test.conf"

coverageExcludedPackages :="<empty>;router\\..*;"

resolvers += "scalaz-bintray" at "http://dl.bintray.com/scalaz/releases"

// Play provides two styles of routers, one expects its actions to be injected, the
// other, legacy style, accesses its actions statically.
routesGenerator := InjectedRoutesGenerator


resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots/"



 


Playing With Scala: Building a Small Web App with Play 2.4, Play-Slick and Postgres

UPDATE: This isn't a full tutorial. I plan on doing this in pieces. The code shown here is one example of one piece of the app. I'll delve more into the code later, but I thought that the issues I encountered while getting up and going were more important to write about initially. I hope that's not too confusing.


 

 

It's been a while, but I finally have something interesting to write about.

 

I've been dabbling with the Play framework again after some time.  Work uses Java for a whole lot of stuff so I figured it'd behoove me to ease myself into the JVM again.  I don't know Scala very well, but after a foray into Go and C#, looking at it now makes a lot more sense than when I first looked at it.

I decided that I wanted to write a workout progress tracker.  I'm getting married in August, and I made it a goal a while back to put on some muscle weight and put some weight on the bar.  I'd been struggling and struggling to get more weight up, and just couldn't be consistent.  Long story short, it turns out a lot of it was my diet, probably almost all of it (but that's another story).  My lifts started improving, I gained about 10-12 pounds, and I decided I wanted to see how far I'd come since I started officially lifting again a few years ago.

 

A few things have changed since I last looked at Play.  For one, it uses the activator system to scaffold apps, run tests, install Scala, etc.  It's a little confusing, but really it's not so bad.  I had also intended on using squeryl for my database object stuff, but it looks like it hasn't been updated in a while, and I remembered the agonizing pain I went through just to get it to work at all the last time I did this. So, I went with Slick, Play's endorsed FRM (not an ORM, as a commenter kindly points out).  Specifically, I went with play-slick, which integrates Slick directly into Play itself.  

This was a little bit of a nightmare at first, because the Play tutorial initially sets you up with Play 2.3.8 by default, and I wanted to be using the latest play-slick, as those docs seemed the most robust and up to date.  It requires: 

  1. Play version 2.4.x
  2. Slick version 3.0.x
  3. Scala version 2.10.x/2.11.x

The issue I ran into, which frustrated me to no end, was that the tutorials don't specify which specific versions of each you need.  I just looked through my commit history to try to find specifics, and I started convulsing.  Long story short: if you're getting a lot of errors trying to find dependencies, make sure your build.sbt has at least a couple of resolvers for the Typesafe repository, and google the latest version of your dependency and try using that.  Here are my plugins.sbt and build.sbt for reference (I went ahead and made a gist so that they remain tied to this post's version; here's the repo in case you are from the future and want to see something that may be more up to date: https://github.com/dhoss/steel): https://gist.github.com/dhoss/1a824ee5397f22eec0ac.

ANOTHER NOTE BEFORE I GO FURTHER: I felt like a complete idiot for this, but adding a resolver in plugins.sbt doesn't necessarily mean it's going to be available in build.sbt.  I'm sure this is incredibly naive and dumb, but I spent a lot of time yelling at my screen trying to figure out why nothing was being pulled down from the repo I had just added.  I ended up just adding the resolvers to both build.sbt and plugins.sbt.  Like they say, "shoot 'em all and let God sort 'em out."
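Concretely, the kind of resolver line I mean looks like this, added to both files (the label is arbitrary; this is the Typesafe releases repo URL as of this writing):

```
// goes in BOTH build.sbt and project/plugins.sbt
resolvers += "Typesafe releases" at "https://repo.typesafe.com/typesafe/releases/"
```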

 

The last thing I really needed to get sorted was this nagging issue where my app couldn't connect to postgres.  Postgres was running, and I could connect to it using the same username and password, but the app couldn't do it. Again, this was something minor that turned out to be right in front of my face the whole time, but it took forever to track down because I wasn't able to find much documentation on Slick driver connection configuration.  This one line was all that was standing between me and glory:

 This needs to go before a line that looks like this:
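For reference, the pair of driver settings in conf/application.conf (also discussed in the comments on this post) look like this, with the Slick driver line coming first:

```
# conf/application.conf (sketch)
# the Slick driver, which you use in your code
slick.dbs.default.driver = "slick.driver.PostgresDriver$"
# the JDBC driver, used by the Slick backend
slick.dbs.default.db.driver = "org.postgresql.Driver"
```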

 

This really confused me for a while, so hopefully this saves others from the same frustration.

Moving on to some code.

Make sure you read through this and get everything set up first: https://www.playframework.com/documentation/2.4.x/Home 

I decided to go with a DAO setup for my database access, because the initial code I was following was set up as such, and it seemed reasonably simple and flexible.  You do have to write your CRUD methods, but I found it nice to be able to control exactly what they're doing.

For starters, let's define our model.  In this case, we basically define our table structure in relatively broad strokes.

Simple enough.  Remember, you still need to write the actual SQL to create these tables with play evolutions.

Next, let's take a look at one of our DAOs.

We define our column methods, and a * projection that acts how you'd expect it to with SQL.  Next, we create a class that sets up the dbConfig, our query object (exerciseTypes), a method to map ids to names for option dropdowns, and our insert and list methods.  Fairly simple.

Per TDD, here is a small test that makes sure we can insert and retrieve things:

Nothing out of the ordinary here either.

So, this concludes things for now.  I've got a ways to go on my scala skills, and this app certainly has a bit to go as well.  I'm hoping to document the app's progress and post any "gotchas" I run into from here on out as well.

Next time, I'll try to demonstrate how to add a datatable for quick CRUD and sorting (Part 2: http://stonecolddev.in/posts/playing-with-scala-building-a-small-web-app-with-play-2-4-play-slick-and-postgres-part-2-testing)


RE: RE: Playing With Scala: Building a Small Web App with Play 2.4, Play-Slick and Postgres

Thanks! I was actually searching for what to call it but fell short. I'll update the article with that as well.


RE: Playing With Scala: Building a Small Web App with Play 2.4, Play-Slick and Postgres

By the way, Slick is an FRM (http://slick.typesafe.com/doc/3.0.0/introduction.html#functional-relational-mapping), not an ORM.


Quick and Dirty How To - Trees in SQL + Postgres + Rails 4

Intro

I'm going to give a quick rundown on how I implemented threaded comments in this app.

I'm using Postgres 9.4 (recursive queries have been available since 8.4), and Rails 4.

 

The Query

 

I wrote a little gem called Treeify, and (right now) it just gives us a little wrapper around some recursive SQL queries.  Here's the main method we are concerned with:

def tree_sql(instance)
  "WITH RECURSIVE cte (id, path) AS (
    SELECT id,
           array[id] AS path
    FROM #{table_name}
    WHERE id = #{instance.id}
    UNION ALL
    SELECT #{table_name}.id,
           cte.path || #{table_name}.id
    FROM #{table_name}
    JOIN cte ON #{table_name}.parent_id = cte.id
  )"
end

 

This generates some SQL that ends up looking like this:

SELECT "posts".* FROM "posts" WHERE (posts.id IN (WITH RECURSIVE cte (id, path) AS (
SELECT id,
array[id] AS path
FROM posts
WHERE id = 7
UNION ALL
SELECT posts.id,
cte.path || posts.id
FROM posts
JOIN cte ON posts.parent_id = cte.id
)
SELECT id FROM cte
ORDER BY path)) ORDER BY posts.id

This does alright performance-wise, although I'd much rather not have the "IN" portion there and have it do a JOIN or something instead, as I believe that would be faster, but I digress.
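The JOIN variant I'm alluding to might look something like this (a sketch I haven't benchmarked):

```sql
WITH RECURSIVE cte (id, path) AS (
  SELECT id,
         array[id] AS path
  FROM posts
  WHERE id = 7
  UNION ALL
  SELECT posts.id,
         cte.path || posts.id
  FROM posts
  JOIN cte ON posts.parent_id = cte.id
)
SELECT posts.*
FROM posts
JOIN cte ON posts.id = cte.id
ORDER BY cte.path;
```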

So, moving on, we have a method called "descendents" which basically grabs all the descendents for a given post:

def descendents
  self_and_descendents - [self]
end

self_and_descendents simply grabs the whole tree; descendents just removes the root of the tree.  This gives us our tree of descendents, which ends up looking something like this (after a little bit of serialization - we'll get to that):

[{"id"=>20,
"title"=>"RE: testing",
"body"=>"<p>asfsafasd</p>",
"parent_id"=>7,
"user_id"=>1,
"created_at"=>Thu, 02 Oct 2014 20:04:45 UTC +00:00,
"updated_at"=>Thu, 02 Oct 2014 20:04:45 UTC +00:00,
"category_id"=>1,
"tsv"=>"'asfsafasd':3 're':1 'test':2",
"slug"=>"re-testing"},
{"id"=>21,
"title"=>"RE: testing",
"body"=>"<p>poop</p>",
"parent_id"=>7,
"user_id"=>1,
"created_at"=>Fri, 03 Oct 2014 02:01:17 UTC +00:00,
"updated_at"=>Fri, 03 Oct 2014 02:01:17 UTC +00:00,
"category_id"=>1,
"tsv"=>"'poop':3 're':1 'test':2",
"slug"=>"re-testing-4d35d96b-1c8b-4749-bf4b-052af7baf3cf"},
{"id"=>22,
"title"=>"RE: RE: testing",
"body"=>"<p>poop fart</p>",
"parent_id"=>21,
"user_id"=>1,
"created_at"=>Fri, 03 Oct 2014 02:01:28 UTC +00:00,
"updated_at"=>Fri, 03 Oct 2014 02:01:28 UTC +00:00,
"category_id"=>1,
"tsv"=>"'fart':5 'poop':4 're':1,2 'test':3",
"slug"=>"re-re-testing"},
{"id"=>23,
"title"=>"RE: RE: RE: testing",
"body"=>"<p>poop and fart</p>",
"parent_id"=>22,
"user_id"=>1,
"created_at"=>Fri, 03 Oct 2014 02:01:40 UTC +00:00,
"updated_at"=>Fri, 03 Oct 2014 02:01:40 UTC +00:00,
"category_id"=>1,
"tsv"=>"'fart':7 'poop':5 're':1,2,3 'test':4",
"slug"=>"re-re-re-testing"}]

Cool!  Our whole tree in one query.

But it's not a tree yet; it's just a flat list of hashes.  We need a tree or it will look really weird when we display it.  Let's fix that.

Let's create a method in our model called "build_tree".  We can pass it our results from our descendents method, which I do like so:

def reply_tree
  # give build_tree an array of hashes with
  # the AR objects serialized into a hash
  build_tree(descendents.to_a.map(&:serializable_hash))
end

This just turns our descendents data into a serializable hash, which could be turned into JSON, or mangled more easily, like so:

 

def build_tree(data)
  # turn our AoH into a hash where we've mapped the ID column
  # to the rest of the hash + a comments array for nested comments
  nested_hash = Hash[data.map { |e| [e['id'], e.merge('comments' => [])] }]
  # if we have a parent ID, grab all the comments
  # associated with that parent and push them into the comments array
  nested_hash.each do |id, item|
    nested_hash[id]['name'] = item['user_id'] ? User.find(item['user_id']).name : "Anonymous"
    parent = nested_hash[item['parent_id']]
    parent['comments'] << item if parent
  end
  # return the values of our nested hash, ie our actual comment hash data;
  # reject any descendents whose parent ID already exists in the main hash so we don't
  # get orphaned descendents listed as their own comment
  nested_hash.reject { |id, item|
    nested_hash.has_key? item['parent_id']
  }.values
end

Let's walk through this a little bit.

First, we want to turn our array of hashes into a nested hash, since we are dealing with tree data.

nested_hash = Hash[data.map{|e| [e['id'], e.merge('comments' => [])]}]

This casts the data variable (our array of hashes) as a hash, mapping each id to the original hash (the comment data itself) merged with a new key called "comments" that's assigned to an empty array.  This sets us up for our nested comments.

At this point, our data structure looks like this: 

{20=>
{"id"=>20,
"title"=>"RE: testing",
"body"=>"<p>asfsafasd</p>",
"parent_id"=>7,
"user_id"=>1,
"created_at"=>Thu, 02 Oct 2014 20:04:45 UTC +00:00,
"updated_at"=>Thu, 02 Oct 2014 20:04:45 UTC +00:00,
"category_id"=>1,
"tsv"=>"'asfsafasd':3 're':1 'test':2",
"slug"=>"re-testing",
"comments"=>[]},
21=>
{"id"=>21,
"title"=>"RE: testing",
"body"=>"<p>poop</p>",
"parent_id"=>7,
"user_id"=>1,
"created_at"=>Fri, 03 Oct 2014 02:01:17 UTC +00:00,
"updated_at"=>Fri, 03 Oct 2014 02:01:17 UTC +00:00,
"category_id"=>1,
"tsv"=>"'poop':3 're':1 'test':2",
"slug"=>"re-testing-4d35d96b-1c8b-4749-bf4b-052af7baf3cf",
"comments"=>[]},
22=>
{"id"=>22,
"title"=>"RE: RE: testing",
"body"=>"<p>poop fart</p>",
"parent_id"=>21,
"user_id"=>1,
"created_at"=>Fri, 03 Oct 2014 02:01:28 UTC +00:00,
"updated_at"=>Fri, 03 Oct 2014 02:01:28 UTC +00:00,
"category_id"=>1,
"tsv"=>"'fart':5 'poop':4 're':1,2 'test':3",
"slug"=>"re-re-testing",
"comments"=>[]},
23=>
{"id"=>23,
"title"=>"RE: RE: RE: testing",
"body"=>"<p>poop and fart</p>",
"parent_id"=>22,
"user_id"=>1,
"created_at"=>Fri, 03 Oct 2014 02:01:40 UTC +00:00,
"updated_at"=>Fri, 03 Oct 2014 02:01:40 UTC +00:00,
"category_id"=>1,
"tsv"=>"'fart':7 'poop':5 're':1,2,3 'test':4",
"slug"=>"re-re-re-testing",
"comments"=>[]}}

As mentioned earlier, we now have a hash with each comment's ID as the key and the actual comment data as the value.

Next step, we want to load up the sub-comments.

nested_hash.each do |id, item|
  nested_hash[id]['name'] = item['user_id'] ? User.find(item['user_id']).name : "Anonymous"
  parent = nested_hash[item['parent_id']]
  parent['comments'] << item if parent
end

This traverses the hash, looks up each comment's author name, and checks whether the current node has a parent ID that matches an ID in the hash; if so, it pushes that node into its parent's 'comments' array.

This is what it ends up looking like:

{20=>
{"id"=>20,
"title"=>"RE: testing",
"body"=>"<p>asfsafasd</p>",
"parent_id"=>7,
"user_id"=>1,
"created_at"=>Thu, 02 Oct 2014 20:04:45 UTC +00:00,
"updated_at"=>Thu, 02 Oct 2014 20:04:45 UTC +00:00,
"category_id"=>1,
"tsv"=>"'asfsafasd':3 're':1 'test':2",
"slug"=>"re-testing",
"comments"=>[],
"name"=>"Devin"},
21=>
{"id"=>21,
"title"=>"RE: testing",
"body"=>"<p>poop</p>",
"parent_id"=>7,
"user_id"=>1,
"created_at"=>Fri, 03 Oct 2014 02:01:17 UTC +00:00,
"updated_at"=>Fri, 03 Oct 2014 02:01:17 UTC +00:00,
"category_id"=>1,
"tsv"=>"'poop':3 're':1 'test':2",
"slug"=>"re-testing-4d35d96b-1c8b-4749-bf4b-052af7baf3cf",
"comments"=>
[{"id"=>22,
"title"=>"RE: RE: testing",
"body"=>"<p>poop fart</p>",
"parent_id"=>21,
"user_id"=>1,
"created_at"=>Fri, 03 Oct 2014 02:01:28 UTC +00:00,
"updated_at"=>Fri, 03 Oct 2014 02:01:28 UTC +00:00,
"category_id"=>1,
"tsv"=>"'fart':5 'poop':4 're':1,2 'test':3",
"slug"=>"re-re-testing",
"comments"=>
[{"id"=>23,
"title"=>"RE: RE: RE: testing",
"body"=>"<p>poop and fart</p>",
"parent_id"=>22,
"user_id"=>1,
"created_at"=>Fri, 03 Oct 2014 02:01:40 UTC +00:00,
"updated_at"=>Fri, 03 Oct 2014 02:01:40 UTC +00:00,
"category_id"=>1,
"tsv"=>"'fart':7 'poop':5 're':1,2,3 'test':4",
"slug"=>"re-re-re-testing",
"comments"=>[],
"name"=>"Devin"}],
"name"=>"Devin"}],
"name"=>"Devin"},
22=>
{"id"=>22,
"title"=>"RE: RE: testing",
"body"=>"<p>poop fart</p>",
"parent_id"=>21,
"user_id"=>1,
"created_at"=>Fri, 03 Oct 2014 02:01:28 UTC +00:00,
"updated_at"=>Fri, 03 Oct 2014 02:01:28 UTC +00:00,
"category_id"=>1,
"tsv"=>"'fart':5 'poop':4 're':1,2 'test':3",
"slug"=>"re-re-testing",
"comments"=>
[{"id"=>23,
"title"=>"RE: RE: RE: testing",
"body"=>"<p>poop and fart</p>",
"parent_id"=>22,
"user_id"=>1,
"created_at"=>Fri, 03 Oct 2014 02:01:40 UTC +00:00,
"updated_at"=>Fri, 03 Oct 2014 02:01:40 UTC +00:00,
"category_id"=>1,
"tsv"=>"'fart':7 'poop':5 're':1,2,3 'test':4",
"slug"=>"re-re-re-testing",
"comments"=>[],
"name"=>"Devin"}],
"name"=>"Devin"},
23=>
{"id"=>23,
"title"=>"RE: RE: RE: testing",
"body"=>"<p>poop and fart</p>",
"parent_id"=>22,
"user_id"=>1,
"created_at"=>Fri, 03 Oct 2014 02:01:40 UTC +00:00,
"updated_at"=>Fri, 03 Oct 2014 02:01:40 UTC +00:00,
"category_id"=>1,
"tsv"=>"'fart':7 'poop':5 're':1,2,3 'test':4",
"slug"=>"re-re-re-testing",
"comments"=>[],
"name"=>"Devin"}}

 

We now have populated sub-comments.

The final step is to make sure sub-comments are only displayed in their respective array.

nested_hash.reject { |id, item|
  nested_hash.has_key? item['parent_id']
}.values

Iterate over the hash, rejecting anything whose parent_id exists in the top-most level of the hash, and return the values of the "good" keys.

Giving us:

[{"id"=>20,
"title"=>"RE: testing",
"body"=>"<p>asfsafasd</p>",
"parent_id"=>7,
"user_id"=>1,
"created_at"=>Thu, 02 Oct 2014 20:04:45 UTC +00:00,
"updated_at"=>Thu, 02 Oct 2014 20:04:45 UTC +00:00,
"category_id"=>1,
"tsv"=>"'asfsafasd':3 're':1 'test':2",
"slug"=>"re-testing",
"comments"=>[],
"name"=>"Devin"},
{"id"=>21,
"title"=>"RE: testing",
"body"=>"<p>poop</p>",
"parent_id"=>7,
"user_id"=>1,
"created_at"=>Fri, 03 Oct 2014 02:01:17 UTC +00:00,
"updated_at"=>Fri, 03 Oct 2014 02:01:17 UTC +00:00,
"category_id"=>1,
"tsv"=>"'poop':3 're':1 'test':2",
"slug"=>"re-testing-4d35d96b-1c8b-4749-bf4b-052af7baf3cf",
"comments"=>
[{"id"=>22,
"title"=>"RE: RE: testing",
"body"=>"<p>poop fart</p>",
"parent_id"=>21,
"user_id"=>1,
"created_at"=>Fri, 03 Oct 2014 02:01:28 UTC +00:00,
"updated_at"=>Fri, 03 Oct 2014 02:01:28 UTC +00:00,
"category_id"=>1,
"tsv"=>"'fart':5 'poop':4 're':1,2 'test':3",
"slug"=>"re-re-testing",
"comments"=>
[{"id"=>23,
"title"=>"RE: RE: RE: testing",
"body"=>"<p>poop and fart</p>",
"parent_id"=>22,
"user_id"=>1,
"created_at"=>Fri, 03 Oct 2014 02:01:40 UTC +00:00,
"updated_at"=>Fri, 03 Oct 2014 02:01:40 UTC +00:00,
"category_id"=>1,
"tsv"=>"'fart':7 'poop':5 're':1,2,3 'test':4",
"slug"=>"re-re-re-testing",
"comments"=>[],
"name"=>"Devin"}],
"name"=>"Devin"}],
"name"=>"Devin"}]

...a nice tree-like structure we can iterate over in whatever we choose for a view.  Disregard the extra "name"=>".." bits; I'm still working out how best to retrieve author data, and am currently using a hacky and ugly method to do so.
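To see the nesting logic in isolation, here's a stripped-down, standalone version of build_tree (the User.find name lookup is omitted so it runs outside Rails):

```ruby
# Standalone sketch of build_tree's nesting logic, minus the author lookup.
def build_tree(data)
  # map each id to its row, plus an empty comments array
  nested_hash = Hash[data.map { |e| [e['id'], e.merge('comments' => [])] }]
  # push each comment into its parent's comments array, if the parent is present
  nested_hash.each do |_id, item|
    parent = nested_hash[item['parent_id']]
    parent['comments'] << item if parent
  end
  # keep only the roots: anything whose parent is in the hash is already nested
  nested_hash.reject { |_id, item| nested_hash.has_key?(item['parent_id']) }.values
end

comments = [
  { 'id' => 21, 'parent_id' => 7,  'body' => 'top-level reply' },
  { 'id' => 22, 'parent_id' => 21, 'body' => 'nested reply' }
]

tree = build_tree(comments)
# id 22 ends up nested under id 21's 'comments' array,
# and only id 21 remains at the top level
```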

 

That's all for now.  Hopefully this sheds some light on this sort of thing.  Some improvements right off the bat would be to move the nested tree construction into the treeify gem, and to make the SQL less clunky so we can mold it a little more and get at associated data (like author info) more easily. 


Playing With Scala: Building a Small Web App with Play 2.4, Play-Slick and Postgres: Part 2 - Testing

Hi all, back again with the second part of the Play + Slick + Postgres adventure.

In part 1, I touched on some of the hurdles I needed to overcome in order to get things up and running (UPDATE: the docs have been updated, and Mirco has done a really good job of making this clearer: https://www.playframework.com/documentation/2.4.0/PlaySlick).  In this article, I'd like to show you how to set up a quick and snappy testing environment.

 

For starters, we will want a separate test database for postgres.  For local tests, just manually create the database and user, and grant that user privileges to create tables, etc. on that database: 

Yes, I know, you don't need 3 psql calls to do this but it works and I'm extremely lazy.

Now, let's start looking at the bigger picture.  What if we wanted to set up continuous integration so we could have our tests run every time we git push'd?  That sounds good, so let's do that with Travis CI.  I'll leave it up to you to get your repository and account set up.  Once that's done, check out the next part.

Travis CI makes setting up CI testing with all sorts of things really easy.  Basically, you just create a file named ".travis.yml" with your configuration options, add it to your repo (I've only done this with github, I don't know how other git hosts work with Travis) and it takes care of the rest after you commit it and push.  Here's the one we're going to be working off of:

Very simply, we tell Travis we're using Scala and Java 8, specify a shell script which will kick off our tests (source shown below), and say that we want to use postgres and have it run the commands in before_script prior to running our tests.  Here's the script that runs our tests:

Briefly, this checks for an environment variable called STEEL_TEST_LOCAL being set to 1, which, if present, tells us we should nuke the current test tables so we can start clean for the upcoming test run.  Set this in your local environment so you have fresh, clean test tables prior to each run.  Travis doesn't care about this, since it tears everything down automatically at the end of the test.

The next line uses a tool called Flyway to migrate our database changes.  I actually really like Flyway, thus far at least, because it was extremely easy to set up and start using.  It completely blows Play's evolutions out of the water.  Let's set that up really quick.

Add the Flyway dependencies and configuration to your build.sbt, like so:

The one thing here that needs explaining is the flywayLocations key.  To specify a path on the filesystem (relative to where your tests are run from, so most likely your app root), you pass something like Seq("filesystem:path_to_sql_files").
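Putting that together, a minimal version of the Flyway configuration in build.sbt might look like this (the URL, user, password, and path here are placeholders for your own values):

```
// build.sbt (sketch) — flyway-sbt plugin settings
flywayUrl := "jdbc:postgresql://localhost:5432/steel_test"
flywayUser := "steel"
flywayPassword := "changeme"
flywayLocations := Seq("filesystem:conf/db/migration")
```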

So, we have flyway set up for migrations, the next step is to create some migrations and have them applied.  Flyway actually has excellent and extremely simple documentation on how to do this, and since we're using sbt to manage our tests, here's the introduction to do so: http://flywaydb.org/getstarted/firststeps/sbt.html.

Backing up slightly, I want to explain one more line in the build.sbt file:

javaOptions in Test ++= Seq("-Dconfig.file=conf/test.conf")

This is all you need to specify a different configuration file for your test environment.  This is really useful because you can just copy conf/application.conf, and change the database credentials to match your test environment, and it will automatically be picked up when your tests run (example here: https://github.com/dhoss/steel/blob/207ce910e668457c3870446322d98e94f5271f88/conf/test.conf#L40)

So, back to setting up our CI environment.  In our run_tests.sh script, we:

  1. check if we are running locally, and if so, nuke the test database
  2. otherwise, we run our migrations with flyway + sbt, and because flyway lets us do so, we override the default database configuration parameters and set them to our test's parameters.
  3. lastly, we call sbt to run our tests with sbt +test

That's pretty much it.  I found it very simple to get a small, no-nonsense test environment set up that transfers over quite nicely to a relatively robust CI solution. 

Next time, I'm planning on discussing some more application specific things dealing with getting your queries right with slick under this set up.


Caching, Caching, Caching and More Caching

So I updated the site code to do some more caching.  Things are going quite a bit faster between the Varnish ESI caching and the redis caching for fragments and such.

Still need to figure out how to get the gallery pages' images caching properly, but as long as the main page and reply trees are cached, the rails app itself will bear less of a burden.


Replies Are Enabled

So I've managed to get replies working to my liking.  I'll write more about the implementation later.


RE: Replies Are Enabled

Testing replies!


RE: RE: Replies Are Enabled

Testing replies to replies!


Uploads and Mina

Mina creates a new directory for each release, and does an ln -s to the shared directory in the root app directory (for logs, app server sockets/pids, etc).  This is great and all but it's a pain in the ass for uploads, especially with Carrierwave.  If I provide the absolute path to the directory I want to write to, it works okay.  It won't display properly though, because it uses the absolute path instead of the relative when it constructs the URL.  If I use the relative URL, images that have already been uploaded display properly, but it won't write to the directory properly, as in, it won't write to the symlinked shared/ directory.  I have no fucking clue why, so I'm just going to use Fog and S3 since it'll be much cheaper anyway.

 

This is a learning experience.


Good news, everyone!

After struggling around with mina and symbolic links, I decided just to go with S3.  Changing carrierwave around to use fog+s3 was a snap. 

I originally had it set up that way, but I was trying to do dumb things with the filenames and it wasn't working properly.  Looks like things are doing OK now, though.

In the next "patch" I'm going to fix this fucking tinymce editor.


Mina and Migrations

I've been using mina (https://github.com/mina-deploy/mina) for deployments, and overall it's pretty great.  However, I just tried to create this column type migration and it just blew right past it in the deployment.  I'm going to see if I can file a bug about it.

I went ahead and ran the migration manually, which blows, but hopefully I'll get this resolved.

On the plus side, I can now write more in my posts.


Kodiak - A CMS

I've been working on Kodiak for over a year.  

I finally got it to a point where it's reasonably tested, featureful enough to be useful, convenient, and reliable, and I've found a deployment solution that I like.

 


Links Work

I have enabled the full-featured TinyMCE editor for creating posts, so now I can spam links.  Most excellent!


Rails Asset Compression

I decided to give Google's Closure Compiler a try for this CMS.  I'm really happy with the amount of compression you can obtain (I don't personally have exact numbers, but I know my largest Javascript asset compressed down by at least half).  The one issue I have is the amount of time it takes (this link is a little old, but I can't find many new benchmarks comparing YUI and Closure: http://scoop.simplyexcited.co.uk/2009/11/24/yui-compressor-vs-google-closure-compiler-for-javascript-compression/).

 

This seems to speed up page loading by a reasonable (at least noticeable) amount.  I'd like to get the load times for the AdSense and Analytics externals down, but overall it's not horrible.  I'm going to set up Varnish as a reverse proxy cache and Redis for caching expensive database calls, once I've determined which are the most costly.
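The Redis side of that plan is straightforward in Rails 4. A sketch, assuming the redis-rails gem and a hypothetical Post model for the expensive query:

```ruby
# Gemfile: gem 'redis-rails'  (one common way to wire this up)

# config/environments/production.rb
config.cache_store = :redis_store, 'redis://localhost:6379/0/cache'

# Then an expensive call gets wrapped like so:
@popular = Rails.cache.fetch('posts/popular', expires_in: 10.minutes) do
  Post.order(views: :desc).limit(10).to_a
end
```

The fetch block only hits the database on a cache miss; everything else is served straight out of Redis.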


RE: RE: RE: Replies Are Enabled

Testing replies to replies to replies!


Look, I Got Featured!

A kind soul submitted Quick and Dirty How To - Trees in SQL + Postgres + Rails 4 to Postgres Weekly.  Whoever you are, thanks!

 

In that same vein, I have an issue opened on Treeify to add a readme and some examples, so I'll be writing up a post about how to use that within the next day or so.


Application Configuration Using Rails 4 + Postgres 9.4 and JSON

fart


Calculating AWS S3 Storage Usage With CSV+TextQL

fart


Treeify 0.03 Release

Treeify 0.03 has been released.  I'll put something together that generates a reasonable changelog in the future, but for now, the biggest changes are:

 

  1. An actual README, so you can sort of figure out how to use it.
  2. A new method called "descendent_tree", which returns an array of hashes in a nested format resembling a tree structure, which is mighty handy for passing to a Rails view or serializing to JSON and traversing with JavaScript.
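For the curious, the nested shape it returns can be sketched in plain Ruby. This is just an illustration of the data shape, not Treeify's actual implementation, and the column names are hypothetical:

```ruby
# Flat rows with parent_id pointers, as they'd come out of the database.
rows = [
  { id: 1, parent_id: nil, name: "root" },
  { id: 2, parent_id: 1,   name: "child" },
  { id: 3, parent_id: 2,   name: "grandchild" },
]

# Recursively nest children under their parents.
def build_tree(rows, parent_id = nil)
  rows.select { |r| r[:parent_id] == parent_id }
      .map { |r| r.merge(children: build_tree(rows, r[:id])) }
end

tree = build_tree(rows)
# tree.first[:children].first[:name] == "child"
```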

All tests pass, which is good enough for me for now.  I'll test this new version against the Kodiak build running this site soon enough, and hopefully it won't break too much.

 

Thanks to github user espen for opening the issue to create a README and motivate me to clean things up and get a reasonable release out.


RE: RE: Handling Increased Load

testing


RE: Handling Increased Load

testing 


Handling Increased Load

On top of being submitted to Postgres Weekly, my Rails 4 + Postgres trees tutorial was submitted to Ruby Weekly, so there has been a pretty drastic increase in load as of late, which is great!

I'm using Pingdom and Google Analytics/AdSense to gauge where people are coming from and what kind of load this site is dealing with.  While the traffic isn't super substantial, I've tried to make some improvements to keep page load times down to a few seconds (there have been spikes of a few minutes in the past, probably due to an influx of readers).

What surprised me was a) how much caching even small database calls helps out in the long run, and b) that the bottleneck STILL isn't at the database level, but mostly around serving assets.  I've compressed the hell out of my CSS and JavaScript, but I'm not yet using progressive JPEGs or setting appropriate caching levels (browser caching, etc.) for images.

I mention images because it's been a little more difficult to flat-out cache them like I normally would other assets: they're served from S3 with an expire time attached, and if the image URL is cached beyond that expire time, the image won't display properly.

One of the things I've been looking at is progressive JPEG compression, where I convert everything that's not a GIF to a JPEG, strip out a lot of the profiles, and compress things down a bit (quality can be reduced significantly, especially in thumbnails, without a noticeable loss) to reduce image size and thus allow for better response times and lower bandwidth.  Some of this is detailed in the documentation for CarrierWave, the upload library I use.
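That processing step can be sketched as a CarrierWave uploader using MiniMagick. The uploader name and quality value here are just examples, not settled choices:

```ruby
class ImageUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick

  process :optimize

  def optimize
    manipulate! do |img|
      img.combine_options do |c|
        c.strip               # drop EXIF data and color profiles
        c.quality '80'        # lossy, but usually fine for web images
        c.interlace 'Plane'   # write a progressive JPEG
      end
      img
    end
  end
end
```

combine_options batches everything into a single mogrify pass, so the image is only rewritten once per upload.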

Another option, after optimization, is to start using CloudFront.  I think the cost would be negligible, and it would pretty much handle all of the static asset caching, rendering anything on my side beyond Rails fragment and action caching unnecessary.  It's fascinating to see the "macro" level at which performance optimizations are needed in web development.

I'm used to doing a lot of under-the-hood, behind-the-scenes API work that involves optimizing data processing in one form or another, but not necessarily "let's make this entire page smaller so the request returns faster and only needs to be performed once every few minutes."

More on this soon!


Golang Application Namespacing

I'm no guru when it comes to golang intricacies, and this one actually put me off Go a few times, but I've finally figured out how to get a library and the application using it to play together in the same repository.

 

Here's a little clarification:

 

This is the basic go app structure that you're told to use from day one (so sayeth https://golang.org/doc/code.html).  It is correct.
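That canonical layout, paraphrased from the linked page with a hypothetical username and project, looks roughly like:

```
$GOPATH/
  bin/
  pkg/
  src/
    github.com/user/hello/
      hello.go    (package main)
```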

However, this doesn't really address: what if I want the library I wrote and the application using it to live in the same git repository?

The problem I ran into was a namespace clash, where Go complains about "package main" and "package myapp" existing in the same place.

Long story short, my solution was to create an "app" directory underneath my library's root directory, like this:
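Sketched out, with hypothetical names:

```
$GOPATH/src/github.com/user/mylib/
  mylib.go        (package mylib: the library itself)
  app/
    main.go       (package main: imports github.com/user/mylib)
```

Because the binary lives in its own directory, each directory contains exactly one package, which is all Go really asks for; `go build github.com/user/mylib/app` then compiles the application against the library one level up.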

This seems to do the trick for my purposes, allowing me to maintain my library and instance code in the same git repository while allowing everything to compile properly.


Treeify 0.05 Released

I've released version 0.05 of Treeify, which mostly just updates gems.  Here's the changelog:

Also, slightly embarrassing: it turns out I need to update the gemspec file with the last-edited date.  I thought that was the authored field.

 

Anyway, enjoy!


RE: RE: Playing With Scala: Building a Small Web App with Play 2.4, Play-Slick and Postgres

Wow! My first comments on this blog.  Thanks for taking the time to check this out.  

I'll be sure to make the changes you mentioned, and update the article as well.

 

Again, much appreciation for setting me straight, I've got lots more to write about in the coming weeks.


Back