Friday, November 21, 2014

Thoughts on the coming storm

From a text exchange I had on election night
The press has gone from
"The Republicans are the responsible party"
"Both parties are irresponsible"
"The Republicans will start being responsible after they win"
Whatever they are going to say after the impeachment.
[voice recognition errors corrected.]

This must be an interesting time to be a political scientist or anyone studying the way institutions form, function and fail.

The Republican party seems locked into a course that defies conventional political explanation. I don't see any way that a fight over this issue is a winning move for the GOP. I am inclined to agree with Josh Marshall's analysis:
It all adds up to an intense and likely toxic campaign fracas in which a lot of people will have a unique and intense motivation to vote. That will apply to people on both sides of course. But the anti-immigration voters vote consistently almost every cycle. And as intense as your animus is toward undocumented immigrants, it's hard for it to compare to the motivation of voters who directly know someone who will be affected. And that latter group has far more 'drop-off' or occasional voters.

This isn't getting mentioned a lot right now. But behind the headlines I suspect it's one of the key reasons Republican elites are upset that this might happen: because it's an electoral grenade dropped right into the heart of the 2016 campaign.
Of course, the standard line at this point is to say something about the leaders of the party losing control of the base, but I don't buy that -- at least not in the way it is generally framed. For one thing, the underlying political philosophy of the base and the leaders doesn't seem that different, and where there are differences, they seem to mostly come from the base actually believing the message crafted by the party elites.

Keeping in mind that they decisively won the last election, the Republicans still have big problems with information and coordination. That makes it more difficult for the party to make decisive rational moves that promote its self-interest and instead leaves it inclined to seek catharsis. Shutdown and impeachment are about emotional release. The challenge for the party leadership is convincing their followers that there's something more important than that.

Thursday, November 20, 2014

Other than stem cells...

What are the most notable examples of regulation holding back new technology? There has been a lot of talk recently about encouraging innovation through deregulation zones. The idea is that, for example, having a city with no regulation on drones will spur a great deal of research into the technology. On one level, this does make a certain amount of sense. The easier it is to do research, the more research we expect to see.

That said, other than studies with human subjects (where the rules really can have a dampening effect), I can't think of an area where regulations are clearly having a big negative impact on research. When a technology is promising and well-funded (as with drones), companies don't seem to have that much trouble working with the rules.

I assume I'm missing some obvious example. Any ideas?

Wednesday, November 19, 2014

"Duct tape and string"

Or as we used to say back in the hills, spit and baling wire.

From James Kwak's recent piece on United Airlines:
There are two lessons to be drawn from these entirely unexceptional examples of air travel gone wrong. One is that United’s computer systems don’t work — for the same reasons that many large companies’ core systems don’t work. The overnight unbooking and rebooking was probably a computer error, and in any case United had no way of rolling back all the automated changes to its reservation system. The automated cancellation of my return flight was either an incompetent customer service representative who didn’t preserve my return reservation when I asked her to, or a computer system that didn’t give her any way of preventing the cancellation. I was downgraded from first class because some marketing genius at United decided to add a new upsell feature to the website — but no one bothered to extend the legacy system they use behind the scenes to capture the new data from the ticket sales process. (This is a common problem with enterprise software these days: companies build new features in their websites but can’t integrate those features properly with their core processing systems.) All of this just reinforces a point I’ve made several times before: the computer systems holding together the world’s largest companies are held together by duct tape and string.
I've got at least a couple of posts I'd like to write on how bad this side of the business often is. Having seen some of these systems up close, I'm surprised things don't crash and burn more often.

Tuesday, November 18, 2014

A subtle issue with standardized tests

This is Joseph.

Dean Dad has a nice piece on assessment.  A part of it that jumped out was:

Johnson’s argument is subtle enough that most commenters seemed to miss it.  In a nutshell, he argues that subjecting existing instruction to the assessment cycle will, by design, change the instruction itself.  Much of the faculty resistance to assessment comes from a sense of threatened autonomy.  Johnson addresses political science specifically, noting that it’s particularly difficult to come up with content-neutral measures in a field without much internal consensus, and with factions that barely speak to each other. 

He’s right, though it may be easier to grasp the point when applied to, say, history.  There’s no single “Intro to History” that most would agree on; each class is the history of something.  The ‘something’ could be a country, a region, a technology, an idea, an art form, or any number of other things, but it has to be something specific.  Judging a historian of China on her knowledge of colonial America would be easy enough, but wouldn’t tell you much of value.  If a history department finds itself judged on “scores” based on a test of the history of colonial America, then it can either resign itself to lousy scores or teach to the test.
This means that the design of standardized tests is crucially important if students and/or teachers are going to be evaluated on them. For some subjects, e.g. basic math, this may be less controversial, but it still involves making choices about what the emphasis will be. A perfect test is like a perfect teacher -- neither beast really exists in nature.

But this is critically important for high stakes tests, because what is taught cannot help but be influenced by the test.  If history questions on the high stakes tests are all focused on colonial America, guess what the history section of classes will look like.  In some sense that is okay, insofar as we have a broad consensus as to what should be taught.  But it does make the content of the tests a matter of public policy and concern as much as any other aspect of school instruction.

Monday, November 17, 2014

James Boyle's devastating take down of Robert Bork

What makes this piece so effective is Boyle's refusal to dismiss Bork as a crank or a charlatan. Boyle instead insists on treating Bork as an important figure in conservative thought. It would have been easy to lapse into mockery, but by starting from the explicit assumption that Bork's ideas are worth taking seriously, Boyle is left with an obligation to examine them in painful detail.

From A Process of Denial: Bork and Post-Modern Conservatism

by James Boyle

With this range of defects it is hardly surprising that Mr. Bork chose to shift his ground somewhat. In The Tempting of America he argues that the understanding of the public at the time the Constitution was ratified, rather than the intent of its original authors, should determine its meaning. There is obviously a price to pay for making this change. The best thing about the intent of the framers was that it appealed to the unreflective idea that a document must always mean exactly what its authors meant it to -- no more and no less. The practitioners of original intent can claim with superficial plausibility that their method is the one "natural" way to read the text. They can even claim that we often (though not always) read other legal documents this way -- trying to determine what Congress, or the judge, or the administrator meant by this word or that phrase. Original understanding has less unreflective appeal. Precisely because it is a more sophisticated notion of interpretation, it sacrifices the idea that this is the only credible way to read a text (what about what the words mean out of context, or what the author meant?), the appeal to everyday practice, and perhaps even the claim that this is the way we read other legal documents.

This problem is a particularly acute one for Mr. Bork. Throughout The Tempting Of America he explicitly connects his struggles to those going on within other disciplines. As well he might. Most disciplines seem to have rejected the idea that the text can only be read to mean what the author intended. Literary critics and historians have added other methods of reading. How would the text have been understood by its audience at the moment that it was written? How would an audience today understand it? Can the text be illuminated by evidence of the author's subconscious desires or conflicts? How does the text read if we take it as an a-contextual attempt at philosophical argument?

These other methods are referred to collectively (and a little pretentiously) as "the reader's revolution against the author." They represent everything that Mr. Bork finds most reprehensible in today's scholarship. He quotes approvingly a letter from intellectual historian Gertrude Himmelfarb attacking this impermissible openness to other methods of interpretation. "Any methodology becomes permissible (except of course, the traditional one), and any reading of the texts becomes legitimate (except, of course, that of the author)." (p. 137) If Mr. Bork was still claiming that the constitution meant what its authors intended, this would be all well and good. But the trouble with Mr. Bork's revamped and sophisticated version of originalism is that it can no longer appeal to the romantic idea that the imperial will of the author must govern the text. "The search is not for a subjective intention." (p. 144) Instead, he has handed over interpretive competence to the historically located readers of the constitution. For reasons we can only speculate about, he has shifted ultimate interpretive authority from the Framers of the Constitution to the "public of that time." Mr. Bork has joined the reader's revolution.

As I pointed out before, this switch is a costly one for Mr. Bork. To the initial cost of having been seen to adopt the very same methodology so often criticised by conservatives in other academic disciplines, one also has to add the cost of having been seen to change from one dogmatically asserted position to another. Mr. Bork obviously feels this one particularly strongly because he denies having done it. Though he described himself during the hearings as "a judge with an original intent philosophy"(61) and argued in print that "original intent is the only legitimate basis for constitutional decision-making",(62) he says in The Tempting of America that "[n]o even moderately sophisticated originalist" believes the Constitution should be governed by "the subjective intent of the Framers." (p.218) He suggests that no-one could ever have held such a belief, because it would necessarily mean that the secretly held beliefs of the Framers could change the meaning of the document. Thus all (moderately sophisticated) originalists must have believed in original understanding all along. This seems like a red herring. There are many varieties of intentionalism and many varieties of "reader-controlled" interpretation. But allowing the intention of the author to control interpretation is fairly obviously not the same thing as allowing the understanding of the reader to control. Expanding the definition of intentionalism does not turn it into the philosophy of original understanding. The `intention of the Framers and ratifiers' is not the same as `the understanding of the American people at the time.' Mr. Bork seems to find it hard to admit the change.

The most interesting example of Mr. Bork's scholarly method is the point in The Tempting of America where he takes sections from his 1986 article The Constitution, Original Intent, and Economic Rights(63) which, as one might suspect from the title, defends original intent, and uses those sections to defend original understanding. At first glance, it appears that he does this by finding the words "original intent" wherever they appear in the article, and simply replacing them by "original understanding." Chunks of text which had reproved Paul Brest for failing to understand that the original intent determines the meaning of the 14th Amendment, are edited, expanded upon, a new philosophy of interpretation inserted. With a quick change of key words they can become reproofs to Paul Brest for failing to understand that original understanding determines the meaning of the 14th Amendment.(64) Even the same counterarguments can be pressed into service. In 1986 for example, "[t]here is one objection to intentionalism that is particularly tiresome. Whenever I speak on the subject someone invariably asks: "But why should we be ruled by men long dead?"(65) In 1990, Mr. Bork finds that "[q]uite often, when I speak at a law school on the necessity of adhering to the original understanding, a student will ask, "But why should we be ruled by men who are long dead." (170) In the era of the word processor, this kind of "search and replace" jurisprudence has its attractions. Still, both the interpretive criteria and the identity of the `dead men' have changed, and Mr. Bork seems uneasy with that fact.(66)

Saturday, November 15, 2014

One of these days I'm going to do a post on genres as fitness landscapes

In the meantime, here's a completely unexpected but surprisingly effective reworking.

Friday, November 14, 2014

What do stock buybacks actually do?

Barry Ritholtz passes along an interesting thought from Aswath Damodaran, a professor at New York University.
Before a company calls for a stock buyback, it has risky assets (its operating business) and riskless assets (cash). After the buyback, the company has less of its riskless asset (cash) but also has fewer outstanding shares.

Hence, we end up with a somewhat riskier stock. Damodaran argues, rationally, that a buyback by an all-equity funded company should be a value-neutral transaction. In other cases, the shift should be reflected by assigning the company a somewhat lower price-earnings ratio.
I don't know enough to comment intelligently on this claim, but it does seem to indicate that, as with so many other stories, the impact of buybacks is considerably more complicated than the experts on CNBC would have you believe.
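Damodaran's arithmetic is easy to check with toy numbers. Everything below is hypothetical -- a stylized all-equity firm with made-up values, not any real company:

```python
# Stylized all-equity firm: risky operating assets plus riskless cash.
operating_value = 900.0   # value of the operating business (risky)
cash = 100.0              # riskless asset
shares = 100.0

price_before = (operating_value + cash) / shares   # 10.0 per share

# Spend all the cash buying back shares at the market price.
shares_bought = cash / price_before
shares_after = shares - shares_bought
price_after = operating_value / shares_after       # still 10.0: value-neutral

# The P/E shifts anyway, because the riskless cash earnings are gone.
operating_earnings = 90.0
cash_yield = 0.02         # assumed riskless rate earned on the cash
eps_before = (operating_earnings + cash * cash_yield) / shares
eps_after = operating_earnings / shares_after
pe_before = price_before / eps_before
pe_after = price_after / eps_after    # lower P/E on the same price
```

The price per share is unchanged (value-neutral), but the post-buyback firm is all operating business, so the same price corresponds to a somewhat lower price-earnings ratio -- which is the shift described in the quote.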

Thursday, November 13, 2014

Fixing Common Core (or at least a small part thereof) over at the teaching blog

With a nod to David Coleman, last week I did a post called "Deconstructing Common Core" focusing on the homework problems going out under the Common Core banner.

[The generally unproductive question of what is and isn't Common Core comes up frequently. Hopefully, having an actual copyright notice will keep us from wasting any more time on the subject.]

I've become increasingly concerned about the direction of mathematics education. Here's a big part of the reason:
I volunteer a couple of times a week to help a group that tutors kids from urban schools. My role is designated math guy. I go from table to table helping kids with the more challenging homework problems.

Recently, I have noticed a pattern in helping with Common Core problems. First I explained them to the students, then I explained them to the tutors.

That may be the most noticeable difference between the mathematics of Common Core and the new math of the 60s. In the summer of love, an advanced degree in mathematics or engineering was sufficient to understand an elementary school student's homework. These days, the tutors with math backgrounds often find themselves more confused than their less analytic counterparts since what they know about solving the problem seems to have nothing to do with what the assignment asks for.

To follow a Common Core worksheet, you really need to have a little knowledge of the underlying pedagogical theories. Unfortunately, if you have more than a little knowledge, you'll find these worksheets extraordinarily annoying because, to put it bluntly, much of what you see was produced by people who had a very weak grasp of the underlying concepts.
I thought it might be of interest to walk through the process of 'fixing' these problems, showing how, with a few changes, these confusing and ineffective problems could be greatly improved.

I used an example of a Common Core problem that went viral a while back.

Here is my proposed fix (which was anticipated by at least one of our regulars).

James Kwak does a valuable service...

...and states the obvious.
The value of a company is supposed to be the discounted present value of its expected future cash flows. Actually, the value of a company is the discounted present value of its expected future cash flows. So it follows that a breakup should only create value for shareholders if it increases future cash flows or lowers the discount rate. Most breakups don’t obviously do either.
This may seem to border on tautology -- "of course, that's the value of a company" -- but if you follow the business page regularly you'll routinely run into strategies and initiatives that make no sense given this definition. Sometimes these decisions are justified in terms of stock price. Other times, flavor-of-the-day notions like disruption are invoked. Occasionally, there is no excuse at all.

Unless you've logged some time with a few major corporations, you can't imagine how much time and money is wasted on unadulterated bullshit largely because C-level executives lose sight of the obvious.
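Kwak's definition is short enough to write down. As a sketch with made-up cash flows (the numbers and the 10% rate are purely illustrative), the discounted present value of a stream is:

```python
def present_value(cash_flows, discount_rate):
    """Discounted present value of future cash flows,
    received at the end of periods 1, 2, 3, ..."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# A breakup that merely splits the same cash flows between two
# pieces, discounted at the same rate, creates no value.
rate = 0.10
whole = present_value([100, 110, 121], rate)
parts = (present_value([60, 66, 72.6], rate) +
         present_value([40, 44, 48.4], rate))
# whole and parts are equal: value changes only if the cash flows
# or the discount rate change, which is exactly Kwak's point.
```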

Wednesday, November 12, 2014

Annals of heroic inference

This is Joseph.

Via Andrew Gelman, we get this gem:
One consequence of this is that the number of respondents who report that they are not citizens yet vote or are registered to vote is quite small in absolute terms: in 2010, for example, only 13 respondents — not 13 percent, but 13 out of 55,400 respondents — reported that they were not citizens, yet had voted. Given the ever-present possibility of respondent or coder error, it takes a bit of hubris to draw strong conclusions about the behavior of non-citizens from such small numbers.
Yes, it is very hard to determine the characteristics of very rare groups.  For one thing, it's unlikely that you know much about the underlying source population.  So I think the authors are right that it is going to be hard to say much about this group, given this instrument.

Tuesday, November 11, 2014

"The unbookable lesson"

I've got a new post up at the teaching blog about the differences in live presentation and other educational media. Check it out if that sort of thing sounds interesting, but if you do, you might want to watch Flight of the Phoenix first.

Monday, November 10, 2014

"I'm sorry, the card says 'MOOPS'."

Given that we are on the verge of deciding the fate of major policy initiatives based on typos, this seems sadly appropriate.

Another side to the driverless car discussion

For years now, there have been two basic narratives when it came to autonomous cars. The first is what I've called the ddulite version: driverless cars are just around the corner and they are about to change our lives in strange and wonderful ways if we can just keep the regulators out of the way. The second version is more skeptical: while lower levels of autonomy are coming online every day, the truly driverless car still faces daunting technological challenges and, even if those are met, these cars may not have the often-promised impact. (You can probably guess which side I took.)

If you follow this story through the New York Times or the Economist, you are overwhelmingly likely to get the first version. You may not even know that this bright future is contested. If, on the other hand, you talk to the engineers in the field (and I've talked to or exchanged emails with quite a few recently), you are far more likely to get the second.

This recent Slate article by Lee Gomes is one of the very few to take the second approach.
For starters, the Google car was able to do so much more than its predecessors in large part because the company had the resources to do something no other robotic car research project ever could: develop an ingenious but extremely expensive mapping system. These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway.

That might not seem like such a tough job for the company that gave us Google Earth and Google Maps. But the maps necessary for the Google car are an order of magnitude more complicated. In fact, when I first wrote about the car for MIT Technology Review, Google admitted to me that the process it currently uses to make the maps is too inefficient to work in the country as a whole.

To create them, a dedicated vehicle outfitted with a bank of sensors first makes repeated passes scanning the roadway to be mapped. The data is then downloaded, with every square foot of the landscape pored over by both humans and computers to make sure that all-important real-world objects have been captured. This complete map gets loaded into the car's memory before a journey, and because it knows from the map about the location of many stationary objects, its computer—essentially a generic PC running Ubuntu Linux—can devote more of its energies to tracking moving objects, like other cars.

But the maps have problems, starting with the fact that the car can’t travel a single inch without one. Since maps are one of the engineering foundations of the Google car, before the company's vision for ubiquitous self-driving cars can be realized, all 4 million miles of U.S. public roads will need to be mapped, plus driveways, off-road trails, and everywhere else you'd ever want to take the car. So far, only a few thousand miles of road have gotten the treatment, most of them around the company's headquarters in Mountain View, California. The company frequently says that its car has driven more than 700,000 miles safely, but those are the same few thousand mapped miles, driven over and over again.


Noting that the Google car might not be able to handle an unmapped traffic light might sound like a cynical game of "gotcha." But MIT roboticist John Leonard says it goes to the heart of why the Google car project is so daunting. "While the probability of a single driver encountering a newly installed traffic light is very low, the probability of at least one driver encountering one on a given day is very high," Leonard says. The list of these "rare" events is practically endless, said Leonard, who does not expect a full self-driving car in his lifetime (he’s 49).

The Google car will need a computer that can deal with anything the world throws at it.
The mapping system isn’t the only problem. The Google car doesn’t know much about parking: It can’t currently find a space in a supermarket lot or multilevel garage. It can't consistently handle coned-off road construction sites, and its video cameras can sometimes be blinded by the sun when trying to detect the color of a traffic signal. Because it can't tell the difference between a big rock and a crumbled-up piece of newspaper, it will try to drive around both if it encounters either sitting in the middle of the road. (Google specifically confirmed these present shortcomings to me for the MIT Technology Review article.) Can the car currently "see" another vehicle's turn signals or brake lights? Can it tell the difference between the flashing lights on top of a tow truck and those on top of an ambulance? If it's driving past a school playground, and a ball rolls out into the street, will it know to be on special alert? (Google declined to respond to these additional questions when I posed them.)

Computer scientists have various names for the ability to synthesize and respond to this barrage of unpredictable information: "generalized intelligence,” "situational awareness,” "everyday common sense." It's been the dream of artificial intelligence researchers since the advent of computers. And it remains just that. "None of this reasoning will be inside computers anytime soon," says Raj Rajkumar, director of autonomous driving research at Carnegie-Mellon University, former home of both the current and prior directors of Google's car project. Rajkumar adds that the Detroit carmakers with whom he collaborates on autonomous vehicles believe that the prospect of a fully self-driving car arriving anytime soon is "pure science fiction."

Saturday, November 8, 2014

The Good Wife makes some really interesting musical choices

... And, frankly, I just look for an excuse to link to any piece of music that gets stuck in my head.

Friday, November 7, 2014

Speed boating

Back when I was in banking, there was a term that got batted around quite a bit called speed-boating. The expression was derived from the way a fast-traveling boat can, for a while, outrun its own wake. As long as a certain speed is maintained, the boat will travel smoothly. However, if the boat suddenly slows down it can be swamped when its wake catches up with it.

Here's how the analogy worked in banking. When you are in the business of lending money, both regulators and investors like to keep track of how well you are doing at getting people to pay their loans back. To do this, they would look at the charge-off rate. At the risk of oversimplifying, this rate was basically the number of loans that went bad divided by the total number of accounts that were open during the period in question.

Obviously, if you booked an account and it went bad, this would add one to both the numerator and the denominator which would push your rate closer to 100%. So you would think that it would always be in the banker's best interest to avoid loans that are going to go bad.

The flaw, or at least the loophole, in this assumption is the fact that the ones are not added at the same time. Even in the extreme cases where the customers never make a payment, the loan is not considered bad for a certain interval, generally 90 days or more. If the customers make a few small payments, this could stretch out for six months or a year.

Let's say I book an account that goes delinquent after one year. That was a bad deal for the bank -- it lost money due to that decision -- but for one year, having that account actually lowered the bank's charge-off rate. Eventually, of course, this will catch up with the bank, but the reckoning can be delayed if the bank continues to book these bad accounts at an increasing rate.
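The mechanics are easy to see in a toy simulation. This is not real bank accounting -- the bad-loan fraction, the reporting lag, and the use of cumulative bookings as the denominator are all made-up simplifications -- but it shows how growth can hide losses:

```python
def charge_off_rates(bookings, bad_fraction=0.10, lag=2):
    """Per-period charge-off rate for a stream of new bookings.
    A fixed fraction of each vintage goes bad, but it only shows up
    in the numerator `lag` periods after entering the denominator."""
    rates = []
    total_accounts = 0
    for t, n in enumerate(bookings):
        total_accounts += n
        charged_off = bad_fraction * bookings[t - lag] if t >= lag else 0.0
        rates.append(charged_off / total_accounts)
    return rates

# Identical loan quality in both banks; only the growth path differs.
speed_boat = charge_off_rates([100, 200, 400, 800, 1600])  # keeps doubling
slowdown   = charge_off_rates([100, 200, 400, 400, 400])   # growth stalls
# The doubling bank's rate drifts downward even though every vintage
# is just as bad; the stalled bank's rate climbs as the wake catches up.
```

As long as bookings keep growing fast enough, the delayed charge-offs are diluted by the flood of fresh accounts -- the boat outruns its wake.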

As with many of our posts, the moral of the story is that numbers don't always mean what you think they mean. You will often see someone pull out a statistic to settle an argument -- "How can you say the business model is unstable? See how low their charge-off rate is?" -- but without understanding the number and knowing its context, you can't really say anything meaningful with it.