A Quick Look at the Bailout Draft

With the bailout expected to pass soon, I noticed that the draft text was online, so I decided to take a look. There are some thoughtful provisions the first bailout plan was lacking, such as an attempt to prevent financial institutions from turning an actual profit on the deal, though with enough loopholes that it doesn't look terribly effective.

Unjust Enrichment

Consider this first attempt to prevent the companies being helped from getting a little 'too much help':

PREVENTING UNJUST ENRICHMENT.—In making purchases under the authority of this Act, the Secretary shall take such steps as may be necessary to prevent unjust enrichment of financial institutions participating in a program established under this section, including by preventing the sale of a troubled asset to the Secretary at a higher price than what the seller paid to purchase the asset. This subsection does not apply to troubled assets acquired in a merger or acquisition, or a purchase of assets from a financial institution in conservatorship or receivership, or that has initiated bankruptcy proceedings under title 11, United States Code.

So the Treasury Secretary can’t buy up troubled assets for more than the seller bought them at (who knows how little they’re really worth), unless the assets were acquired in a merger or acquisition. Now, just think about how massive that loophole is… given how many of these financial institutions have been buying each other up. I really hope they don’t pay anywhere near full price for these assets, as they’ve dropped in value significantly since they were issued… and this attempt to stop them from overpaying for them seems rather half-hearted.

Transparency and Review

Unlike the original bill the Treasury wanted, this one sets up an oversight board responsible for reviewing the actions of the Secretary and ensuring compliance with the rest of the act. The oversight board then has to report back to the Congressional Oversight Panel (they sure do love panels and boards). Reports covering purchases and their justifications must be sent to the Congressional Oversight Panel every time the Treasury uses another $50 billion.

Not too shabby on oversight…. I wonder if the report will be put online so the people can see where their money is going?

Attempting to Rein in the Foreclosures

A decent chunk of the bill includes provisions attempting to keep people in their homes rather than foreclosing. This makes a hell of a lot of sense to me: the more foreclosures occur, the more bailouts will be needed.

One of these bits allows for loan modifications of the mortgage:

(2) MODIFICATIONS.—In the case of a residential mortgage loan, modifications made under paragraph (1) may include: A) reduction in interest rates; B) reduction of loan principal; and C) other similar modifications.

Curbing executive compensation

I was actually surprised that lawmakers would do anything to offend the executives at these companies, who shell out so much money to lobbyists that inevitably finds its way back to the lawmakers. So I'm eyeing this section with skepticism, as I'm sure there are more than a few loopholes.

The standards required under this subsection shall be effective for the duration of the period that the Secretary holds an equity or debt position in the financial institution.

I’m curious how easy it is for the Treasury to buy troubled assets, and try and get out of a position that qualifies in this case…

Ah, and here’s one rather interesting limitation of the executive compensation, apparently it has nothing to do with how many millions the executive makes, but merely the top 5 executives for a company:

(3) DEFINITION.—For purposes of this section, the term ''senior executive officer'' means an individual who is one of the top 5 executives of a public company, whose compensation is required to be disclosed pursuant to the Securities Exchange Act of 1934, and any regulations issued thereunder, and non-public company counterparts.

I really don’t understand why they stopped at the top 5. How about restricting any executive (VP, Senior VP, Partner, etc) from getting those crazy incentives? Why not put a cap on everyone in the company on payment, the fairly reasonable one that they can’t make more than the top paid US Govt official (currently the President, at $400k/yr)?

The good news is that it does rein in some of the crazy executive perks to an extent, like the golden parachutes and the millions in bonuses. CNN reported that in the past few years, executive compensation has gone up 20%, while earnings for the companies went up 3%. Odd how the rest of the employees don't see 20% raises…

The limits are also a little odd; it says they will be in effect only while the Treasury holds assets of the company:

(A) limits on compensation that exclude incentives for executive officers of a financial institution to take unnecessary and excessive risks that threaten the value of the financial institution during the period that the Secretary holds an equity or debt position in the financial institution;

That’s great and all…. but I’m having trouble finding anywhere defining what these limits actually are.

More Transparency (Section 114)

Ah ha, a mere 39 pages in, a bit more on transparency. The Secretary has to make available to the public, in electronic form, full descriptions, amounts, and pricing of assets that are being bought, within 2 business days of the purchase.

If only the executive branch was a bit more transparent…. :)

How much is it really??

The first chunk of money the Treasury gets to spend is $250 billion. Later, when it's supposedly running low, the President can send a written certification to Congress that it needs more, and the limit on outstanding money can be raised to $350 billion. After that, the President can write in again, and the limit can be raised to $700 billion outstanding. So in effect, it's a $700 billion bailout built without any actual data as to whether that's what it will take.

The remaining 30 pages are mainly details on which positions oversee whom, how often, who's part of them, etc. Nothing terribly exciting stands out here, nor did I find any explanation of exactly what limits on executive compensation are being applied. Was it just golden parachutes and bonuses?

CNN took a look and apparently came to a similar conclusion in their reading: the companies aren't allowed to write new 'golden parachutes' for their top 5 executives… but the current contracts that may include golden parachutes are just dandy.

They also indicate that the companies will not be able to deduct the salary they pay to executives above $500k. Errr, "deduct the salary"? Not sure what that means; hopefully it means they're capped at $500k, but it's hard to tell.

It still sucks

Overall, I’m still not happy with it. Nor are all these folks, who happen to be economists. There’s some good points in that letter as well:

In addition to the moral hazard inherent in the proposal, the plan makes it difficult to move resources to more highly valued uses. Successful firms that may have been in a position to acquire troubled firms would no longer have a market advantage allowing them to do so; instead, entities that were struggling would now be shored up and competing on equal footing with their more efficient competitors.

This bailout bill is definitely better than the prior one, but I think it's still a waste of my taxpayer money. There's zero guarantee it will work, zero guarantee the money will ever come back, and zero guarantee this will be the last bailout of this magnitude we come across. While the bill has some measures to try to decrease foreclosures, most in the housing industry still believe the worst is yet to come:

“We’ve been saying that the foreclosure trend has not yet peaked,” said Doug Robinson, a spokesman for the foreclosure prevention organization NeighborWorks America. “Before it was a subprime problem,” he said. “Now, it’s everybody’s problem.”

Ouch. So this bailout only helps a few companies deal with the current problem. I really don't want to know what the Treasury will ask the taxpayer to do next to stop the companies holding the upcoming foreclosures from going bankrupt.

Ringtone sync FAIL

I’ve had an iPhone for about a month now, quite loving it. Today I decided to add some custom ringtones to it, so I went into iTunes and clicked the button to sync ringtones…

http://www.flickr.com/photos/45936054@N00/2887095052

Sync FAIL

Really? They couldn’t find any way to add a few ringtones without removing (erasing) all the music I already dragged over? Really???

Pylons 0.9.7rc1 Release

Pylons 0.9.7rc1 was released a week ago; unfortunately I haven't had time to actually blog it, so better late than never. This is a big step towards the 0.9.7 release, and it contains some major changes over 0.9.6 while still retaining a huge degree of backwards compatibility.

At this point, the thing I get asked the most is:

When will Pylons 0.9.7 be released?

So the short answer: when the new website and docs are ready. We're going to a lot of effort to totally eradicate that old mantra that "Pylons has no docs", and we're doing it big. Most of the docs have already been updated, revamped, and moved to the new Sphinx doc tool (take a look at the new Pylons docs).

The new website is nearing completion as well, and for those using the 0.9.7 release candidate, posting a traceback will get you a link to it that's on the new beta website. Until then, 0.9.7 is feature-frozen and newer RCs up to 0.9.7 are bug-fix only.

New Features

Pylons gets a substantial amount of its feature set from the other Python libraries it uses, and here are some of the new things those libraries have brought Pylons users:

This is a huge update, including safely escaped HTML builders, a literal object to mark strings as safe (vs. unsafe) for use in templating languages, and a move away from all the old ported Rails helpers to new ones that in many cases have more features with less bugginess.

  • Routes 1.9, with minimization turned off. This makes for more predictable route generation and matching, which had confused many and in some cases led to hard-to-debug routes being created and matched. The new syntax available also breaks with the Rails-ish Routes form, and lets you easily include regexp requirements for parts of the URL.
  • Mako Automatic Safe HTML Escaping
  • Simplified rendering setup that doesn’t use Buffet
  • Simplified middleware setup with easier customizability
  • Simplified PylonsApp for customizing dispatch and URL resolving
  • and lots of bug fixes!

There’s a more detailed page covering 0.9.7 changes available as well that can also assist in the rather minimal change needed for a 0.9.6 project to get going with 0.9.7rc1.

Other things in Pylons-land

With TurboGears2 extending Pylons as its foundation, various parts of TG2 have become usable within Pylons, not to mention existing packages that have been getting better and better.

ToscaWidgets has gotten drastically simpler, no longer requiring the rather confusing RuleDispatch package with its generic methods. This makes the tw.forms package install with a fraction of the packages it used to require, and since it now comes with Mako templates, it won't incur the speed bumps it used to have from its use of Genshi. The new Pylons tutorials for it also make it a breeze to quickly create large forms with advanced widgets.

Some might have noticed that Reddit released their source code, which happens to be in Pylons. Their code is a good example of the customizing possible in a Pylons-based project: they added some custom dispatching to make controllers work in a fashion similar to the web.py controllers they ported their app from. In a way, it's similar to how TG2 has been able to support TG1 users for the most part by customizing Pylons to dispatch in a TG1-style manner.

Profiling an application got a lot easier with repoze.profile, and I'm sure more cool bits of WSGI middleware will be coming out of the repoze project in the future, not to mention past handy bits like repoze.who, which is used in TG2 for its new identity system.

I ported a little app that Robert Brewer wrote to track memory leaks. Being terribly uncreative on names for my new WSGI middleware version, I called it Dozer. It’s a handy little piece of WSGI middleware to throw in when you think you might have a memory leak to try and sort it out.

Pylons is moving along quite nicely, and the amount of WSGI middleware and tools that work with it continues to expand, which makes it hard to list all the cool new projects I've seen lately that work wonderfully with Pylons.

Mako and SQLAlchemy continue to evolve, with Mako having pretty much zero backwards-incompatible changes in the past 6+ months, while SQLAlchemy slowly deprecates things as they prepare the 0.5 release. These packages have massive amounts of features and are rapidly becoming very stable, easily making Pylons + Mako + SQLAlchemy a tough combination to beat.

Routes 1.9 Release

I released Routes 1.9 today, another step on the road to Routes 2.0. Some of the highlights I had previously blogged about are now available:

Minimization is optional

Pylons 0.9.7 will default to turning minimization off (projects are free to leave it on if desired). This means that constructing a route like this with minimization off:

map.connect('/:controller/:action/')

will actually require both the controller and the action to be present, as well as the trailing slash. This addresses the trailing slash issue I wanted to fix as well.

Named Routes will always use the route named

This is now on by default in Routes 1.9, which results in faster url_for calls as well as the predictability that comes with knowing exactly which route will be used.

Optional non-Rails’ish syntax

You can now specify route paths in the same syntax that Routes 2 will be using:

map.connect('/{controller}/{action}/{id}')

Or if you wanted to include the requirement that the id should be 2 digits:

map.connect('/{controller}/{action}/{id:\d\d}')

Routes automatically builds the appropriate regular expression for you, keeping your routes a lot easier to skim over than a bunch of regular expressions.
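As a rough illustration of what that compilation amounts to, here's a hand-written sketch of the kind of pattern such a route could boil down to (this is illustrative only, not Routes' actual generated regex):

```python
import re

# Hypothetical equivalent of map.connect('/{controller}/{action}/{id:\d\d}'):
# each {name} becomes a named group, and the :\d\d requirement replaces the
# default "anything but a slash" segment pattern.
pattern = re.compile(r'^/(?P<controller>[^/]+)/(?P<action>[^/]+)/(?P<id>\d\d)$')

match = pattern.match('/entry/view/42')
print(match.groupdict())  # → {'controller': 'entry', 'action': 'view', 'id': '42'}

# The \d\d requirement rejects anything but exactly two digits:
print(pattern.match('/entry/view/1234'))  # → None
```

Writing that kind of regex by hand for every route is exactly the noise Routes saves you from.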

Routes 2 will be bringing redirect routes and generation-only routes, making Routes 1.9 a great way to transition to Routes 2 when it's ready.

Pylons on JVM’s (and other VMs)

Phil Jenvey has been making great progress getting all the components of Pylons running on Jython, and posted a good write-up of the remaining work being done. It's interesting to note that one of the big issues will affect any web framework on Jython, not just Pylons: the reload time when restarting the server during development.

While I don’t plan on deploying Pylons apps in WAR files anytime soon, its nice to see Jython emerging as a candidate for deployment.

Most bizarre Git service and other stupid Rails powered “businesses”

I can’t help but get totally baffled when I see a business model like this.

Yes, that’s right, you can pay for the privilege of keeping a copy of your distributed version control system (DVCS) private repositories on someone else’s machines. You also get to pay depending on how many people you want to allow to collaborate on it.

Never mind that one of the entire points of a DVCS is that you do NOT need a central repository. Does anyone actually work at a "Large Company" (as the page indicates) that would be stupid enough to pay $100/month to put all their proprietary and very personal code repositories on a third-party web service?

So what are you paying for? Well, to start with, they have awesome integration with Lighthouse, since we all know there's no decent free open-source issue tracking system… cough Trac cough Roundup cough. Oh wait, since there are absolutely no simple web-based issue tracking systems, let's have another slick business model to get people to pay for a stripped-down Trac (but this time with a really pretty UI)!

What do these sites have in common? Rails, "look ma, I can copy-paste the business plan too" pricing models, and some good graphic designers at the helm. There also seems to be an interesting amount of cross-promotion among these sites, as well as a nice blog post from the Rails creator himself promoting GitHub. I'm sure no one who has read this rant will be surprised, though.

I only hope that no one starts to believe that a DVCS actually requires these “please pay” copies of their DVCS repo.

Update (11/12/2008): This post has apparently proven popular enough to keep coming up, so I thought I'd clarify a bit more.

Many people have pointed out the obvious benefits of services like GitHub, and I've used one just like it myself: BitBucket. These sites are great for open-source projects, as many have rightfully noted; they make it easy to collaborate on and fork projects, and easy for maintainers to pull patches from forks after looking them over.

Most of their social-network features become moot, though, when working on company code that's not open-source (note that this rant is directed entirely at the paid service options, which are for private repos). None of the companies I've worked at would ever let their private source code leave their own servers. Since you need to deploy a site anyway (many times to a remote computer), which will generally require ssh access, it's trivial to use the modern DVCSs over ssh… which makes it seem very silly to me to pay so much to another company for a bunch of useless social features on a private repo.

Part of the original humor intended in this rant was that a centralized repo hub has become one of the stronger selling points for a distributed VCS. Unfortunately many seem to have missed that point.

Google Datastore and the shift from a RDBMS

So many random musings and theories on Google App Engine; I won't bother musing about it myself, except to mention that Ian Bicking put together instructions for running Pylons on it. These also work fine with the latest Pylons 0.9.7 beta.

I got Beaker, the session and caching WSGI middleware that Pylons uses, running fine on Google now, using the Google Datastore as the backend. Diving into the Datastore docs to get a grip on the best way to implement it shed some light on the transition any developer thinking about writing data-backed apps for GAE (Google App Engine) will need to tackle.

Some notes on terminology: Google has Entities, Kinds, and Properties. These correspond roughly to Rows, Tables, and Columns in RDBMS-speak. Kinds can also be called classes, because in the Python API you create a class and inherit from the appropriate datastore class. Entities may also be referred to as instances, since performing a query returns a list of objects (instances).

Sessions and Datastore

First, regarding sessions. Beaker will now let a Pylons app use normal sessions on GAE; the real question is, should you?

The Google User API makes it trivial to get the currently logged-in user, and the datastore comes with a property type for a 'table' that is specifically made to reference a Google user account. So with just one short command, you can have an entity from the Datastore that corresponds to a given user, i.e.:

userpref = UserPrefs.all().filter('user =', users.get_current_user()).get()

The Datastore is blindingly fast for reads and queries, so there's a compelling reason to ignore sessions altogether and just fetch the appropriate preferences or what-have-you. This leaves the normal reason for wanting more, i.e., a session: "But wait, I want to stash other little things with the user when they run around my app!" Not a problem.

Google’s Datastore has an Expando class for entities that lets you dynamically add properties of various types. It’s like having a RDBMS where you can just add columns to each row, on the fly. The dynamic_properties() entity method makes it easy upon pulling an object, to see what dynamic properties were already assigned.

As far as I’m concerned, this pretty much mitigates the need for a session system. If you didn’t want to require user login, you could always make a little session ID yourself, and keep that on the UserPrefs table as a separate property, then query on that.

Rethinking how you store/query/insert data

Going slowly through all the Datastore docs, and especially reading some of the performance discussions people were drumming up on the GAE mailing list, brought up a number of issues with how people with RDBMS backgrounds approach the Datastore. Many of the table layouts I saw pasted on the mailing list were clearly written for how an RDBMS works, with sometimes significant work required to adapt them to the Datastore.

A little background might help in understanding this difference. Google Datastore is implemented on top of BigTable, which is described briefly in the paper as a "sparse, distributed, persistent multi-dimensional sorted map". One of the other descriptions I heard, in a talk on data storage techniques at FOO Camp from a Google developer, was: "think of a BigTable table as a spreadsheet, except with pretty much as many columns as you want".

This brings about a fairly big shift in thinking for the developer who grew up on an RDBMS. Data organized in the usual fairly normalized fashion, written without regard to massively distributed data stores, suddenly becomes a rather big problem. Consider a few of the 'limitations' of the Datastore that will jump right out at you:

  • You cannot query across relations
  • You cannot retrieve more than 1000 rows in a query
  • Writes are much, much slower than you're used to (a developer on the mailing list said 50 inserts with 2 fields each almost ate up the 3 seconds allowed for a web request)
  • There are zero database functions available
  • There is no "GROUP BY…", which doesn't matter much if you read the prior bullet point
  • Transactions can only be wrapped around entities in the same Entity Group (i.e., the same section of the distributed database)
  • Referential integrity only sort of exists
  • No triggers, no views, no constraints
  • No GIS Polygon types, or anything beyond just a GeoPoint (odd, considering that Google has so much mapping stuff)

Then of course, a few of the new things that might leave you scratching your head, quite happy, or both:

  • Keys for an entity may have ancestors (ancestors aren't relations; they're different and have to do with Entity Groups, which determine what you can do in a transaction, wheeee!)
  • An Entity Group doesn't have to all be of the same Kind; it's more of an instruction to the Datastore to keep these entities near each other when distributed
  • Keys can be made before the entity, just so you can make descendant entities of the key, then make the ancestor
  • The handy ListProperty, when used in a query, will let you take the conditional argument and apply it to every item in the list (sort of like an uber 'IN (…)' query, except it can also find all the data where a member of the list was greater than, less than, or equal to something else)
  • Making more Entity Groups is a good idea when you frequently need a batch of "these few things" for a request, especially if you need to alter them all at once in a transaction
  • Normalizing is frequently bad since you can't query across relations, and dynamic properties make it easy to heavily denormalize. If you do normalize some data and it's part of the same batch of 'things you always need at once', use Entity Groups. Or use a ReferenceProperty if it's merely something related you may occasionally hit.
  • The ReferenceProperty() does not have to refer to a known Kind; you can decide on the fly which datastore classes to reference if none was specified when declaring the ReferenceProperty
  • Many-to-many relations aren't what you think; you can now have a ListProperty() of ReferenceProperty()s, which may or may not all refer to instances of the same class
  • A query may return entities of different Kinds, if querying for entities of a given ancestor

(There’s probably a bunch more as well, these were some of the obvious ones that jumped out at me)

The end result is that the standard way a developer writes out a table schema for an RDBMS should be dumped almost entirely when designing an app that uses the Google Datastore. Storing data with the Google Datastore isn't difficult, but it is a pretty hefty paradigm shift, especially if you've never left RDBMS-land. This is not a trivial change to make in how you approach your data.

I rather enjoyed working with these new ways of tackling data, and the ways it lets me store and refer to data open possibilities beyond the traditional RDBMS. In the short term, though, I doubt I'll be making any GAE apps until there's an alternative implementation that's production-ready… I just can't handle the lock-in.

And of course, please note any corrections or inaccuracies in the comments.

Where’s the Capistrano knock-off for us Python web devs?

Rails, and Ruby in general, has had Capistrano for a while now to help with the task of deploying and automating builds for servers, and even clusters of servers. Where is something like this for Python?

Now, before people note that I could easily use Capistrano for my Python project, I should say that it is rather annoying to have to install yet another language. On the other hand, given that I will likely only need to install it on my development machine (which, running OS X, already has Ruby… and gems), it doesn't seem too horrible to just use Capistrano and be done with it.

However, Capistrano doesn't quite manage Python eggs, and that task isn't exactly trivial. zc.buildout, which I previously ranted about due to its odd docs, does the management pretty well. It even results in a rather consistent build experience no matter where it occurs. Two commands and, boom, the app is ready to go.

Unfortunately, life isn’t quite that easy. When something does go wrong with buildout, trying to track it down can be exceptionally hairy. Having a tool so ‘magical’ as I’ve heard some describe it, carries its own penalties when things fail. Buildout also fails to automate the task of deploying the app itself to the other machine, which is still a manual process. It does manage egg’s rather well, though it does some very odd mangling of sys.path to accomplish this in every script.

I don’t need something as full featured as Capistrano, but I’d love to see something that has no more requirements than I’m already depending on (Python), that can handle the task of easily automating deployment of a Python application – including ensuring all the proper versions of the eggs I want are used – on a remote *nix machine. I recall seeing a post (I think by Jeff Rush) awhile back, on a system just like this that he unfortunately never released. Vellum also looks like it could be hacked further to do this task…

Is there some build/deployment tool that is just Python that I've missed? Something that will let me set up a script with some commands for deploying my app on another server and setting up the webapp (hopefully in a virtualenv) so it's ready to run (and optionally restart it/migrate the db/etc. :)?
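For what it's worth, the bare-bones shape of what I'm after might look something like this; everything here is hypothetical (host, paths, egg name, restart script), just a sketch of the kind of tool I mean:

```python
import subprocess

def deploy_commands(host, app_dir, egg='MyApp==1.0'):
    """Build the shell commands for a minimal remote deploy.

    All names here are placeholders, not a real project's layout.
    """
    venv = f'{app_dir}/venv'
    return [
        # refresh the checkout on the remote machine (however you get code there)
        f'ssh {host} "cd {app_dir} && svn up"',
        # pin the exact egg version inside the app's virtualenv
        f'ssh {host} "{venv}/bin/easy_install \'{egg}\'"',
        # bounce the app server
        f'ssh {host} "{app_dir}/restart.sh"',
    ]

def deploy(host, app_dir, dry_run=True):
    for cmd in deploy_commands(host, app_dir):
        print(cmd)
        if not dry_run:  # only actually run when explicitly asked
            subprocess.run(cmd, shell=True, check=True)

deploy('me@example.com', '/srv/myapp')  # dry run: just prints the steps
```

Even something this small covers the 'two commands and boom' feel, but a real tool would need rollbacks, clusters, and db migrations, which is exactly why I'd rather not maintain it myself.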

MarkMail now indexing Pylons-discuss/devel

I’m thrilled to announce that MarkMail now indexes the Pylons-dicuss and Pylons-devel mail lists. For those looking for a great way to search and browser the Pylons mail lists, the MarkMail interface is top-notch.

For those looking for detailed Pylons docs… there are some very exciting developments on the documentation front coming up shortly that will make you rather happy. :)

Sacrificing readability for automated doc tests

I’ve tried several times in the past to try out zc.buildout, a fairly neat sounding package that automates the buildout process for a Python app. The promise of fairly easy to write recipes that can setup external processes like nginx in addition to ensuring my webapp is put together with all the things it needs sounded great.

It occurred to me that the docs definitely didn't help at all. In fact, they're noticeably bizarre unless you realize why they're written the way they are. Here's a sample of the zc.buildout docs on making a new buildout and bootstrapping.

You’ll notice that it almost looks like command line interactions of some sort are occurring, yet the author of the docs is clearly at an interactive Python prompt. Note that none of the commands shown there will work if you copy them into your Python interpreter, nor is there any indication what you would need to do to get such commands available. As a user trying to follow the docs, that leaves me wondering… am I supposed to be in a Python interpreter? What do these variables get expanded to so that I can do that at my shell prompt? Why can’t you just give me the damn command line I’m supposed to run so I can copy/paste???

Yes, it definitely got me a bit frustrated. I believe the only logical reason the docs are in this bizarre form is so that they can be automatically doc-tested. It's a shame that the result is docs that make me want to close the web page as soon as I stumble upon the 'samples', since there's no way I can handle wading through the command-line abstractions.

Doctests can be useful, but turning command-line interactions into a Python interactive session is a massive readability issue. People know and recognize command-line interactions; let's stick with them, please.
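For the curious, this is roughly the pattern at work: shell commands get driven from inside a doctest so the docs stay executable. A minimal stand-alone sketch (the helper name and the echo command are invented here; zc.buildout's actual test helpers differ):

```python
import doctest
import subprocess

def run(cmd):
    """Run a shell command and return its output, doctest-style.

    This mimics the zc.buildout docs' pattern of wrapping command-line
    interactions in a Python interactive session:

    >>> print(run('echo bootstrapping buildout'), end='')
    bootstrapping buildout
    """
    return subprocess.run(cmd, shell=True, capture_output=True,
                          text=True).stdout

# Execute the doctest embedded in run()'s docstring, the same way a
# doc-tested manual would be checked automatically.
runner = doctest.DocTestRunner()
test = doctest.DocTestParser().get_doctest(run.__doc__, {'run': run},
                                           'run', None, 0)
runner.run(test)
print('doctest failures:', runner.failures)  # → doctest failures: 0
```

The docs stay machine-checkable, but the reader pays for it: a plain `$ echo bootstrapping buildout` would communicate the same thing with none of the indirection.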