Wednesday, October 22, 2008
I've just finished debugging an iPhone demo application for an article, and I ran into an interesting problem.
Apple's documentation strongly pushes using a Model-View-Controller design for iPhone applications. That's good. Having struggled to debug large applications that didn't use MVC, I definitely see the benefits.
The idea is to separate the application into three distinct parts. Roughly speaking, the model manages the application's state, the view displays information to the user, and the controller coordinates the other two.
In my iPhone app, the views are built in Interface Builder. Each view has its own UIViewController. But, where should the model go? Where should it live? How do I connect it to my view controllers?
Here's the problem: most of the views are instantiated in my code. I can easily pass in the model when I instantiate the class. My root view, however, is instantiated by the nib, secretly and mysteriously in the background.
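For the views I create in code, that hand-off is trivial. Here's a rough sketch of what I mean (DetailViewController and setModel: are made-up names, not part of the actual project):
// Pushing a view controller that I create myself -- the model is easy to inject.
DetailViewController *detailController =
    [[DetailViewController alloc] initWithNibName:@"DetailView" bundle:nil];
[detailController setModel:model];
[navigationController pushViewController:detailController animated:YES];
[detailController release];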
So far, the best solution I've found is to access the root view in my application delegate as follows:
- (void)applicationDidFinishLaunching:(UIApplication *)application {
    // Create the model and hand it to the root view controller,
    // which the nib instantiated behind the scenes.
    model = [[Model alloc] init];
    id rootController = [navigationController visibleViewController];
    [rootController setModel:model];

    [window addSubview:[navigationController view]];
    [window makeKeyAndVisible];
}
This is not as clean as I'd like, but it's better than the other options I've come up with. In particular, there's an intriguing Proxy Object in Interface Builder, which sounds like an ideal solution--but I can't make it work, and I haven't found any documentation on it yet.
If anyone out there has any suggestions, please let me know.
-Rich-
Tuesday, September 16, 2008
Last Time Machine Update
Ok, this is the last thing I'll say on the subject (probably).
Forget everything I've ever said about hosting Time Machine backups on a shared drive. Yes, it's technically possible. But, trust me. Don't do it. Just don't. It's not worth the heartache.
Go out and buy a Time Capsule. Wireless backups that actually work! I've been running mine for two months now, and I haven't had a single hiccup. Much, much better than the shared drive option.
-Rich-
Thursday, August 7, 2008
Object Lesson in Single Points of Failure
So, I bought my wife a new iPhone. Before we attached it to her computer, I decided to update all the software, and (of course) the machine choked and died. I tried everything I could think of, but I couldn't get it back up and running. We took it to the Genius Bar, and the genius in question tried everything he could think of. No luck. According to our tests, the hard drive was fine, but Leopard would not install properly.
I decided to re-partition and reformat the hard drive. But, before doing that, I wanted to make sure the user data was backed up. I took the 500 GB disk we were using for Time Machine backups, attached it directly to my wife's machine, and copied over her user directory. Then I partitioned and reformatted the disk and reloaded Leopard. Everything went well.
Until...
My lovely wife walked by, accidentally snagged her hand on the USB cable for the hard drive, and knocked it to the floor. Now it only fell a few feet onto soft carpet, but when I plugged it back in, it refused to mount and only made a strange clicking sound.
All our backups were on that drive. The Time Machine backups. The new backups I'd just made. Everything. All of it gone. Poof.
Now, I have secondary backups for my most important files. But, alas, my wife did not.
It's easy to feel over-confident when you have backups. But backups can fail. Whenever you only have a single copy of your data, even if it's only for a split second, you're at risk.
Lesson learned. It's not a new lesson, but one I seem to need to be reminded of from time to time. Single points of failure are bad.
-Rich-
Monday, August 4, 2008
Thoughts on Matlab
I've been using Matlab a lot at work lately.
Now, by all rights, this should be a language that I love. It's a dynamic, highly expressive language.
However, something about it just sets my teeth on edge.
To be fair, I think I can divide my complaints into two groups: issues that are my fault and issues that are the language's fault.
Matlab treats everything as matrices. OK, that's a bit of an exaggeration, but not much of one. More to the point, Matlab works best when you are performing operations on an entire vector or matrix at once--rather than iterating over the data and performing the operation on each element individually.
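Just to make that difference concrete, here's a throwaway sketch (purely illustrative, not code from any real project of mine):
% Element-by-element loop -- the style Matlab discourages
y = zeros(size(x));
for i = 1:numel(x)
    y(i) = x(i)^2 + 3*x(i);
end

% The vectorized equivalent -- one operation over the whole matrix
y = x.^2 + 3*x;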
A lot of the built-in functions and operations are designed to work across entire matrices. In fact, these functions often operate equally well on scalar values or matrices. For example, in X < Y, the X and Y could be two scalar values (in which case, it will return 0 for false or 1 for true), or they could be two equal-sized matrices (in which case, it will return a matrix of 0s and 1s). Personally, I think this obfuscates the code. It's often difficult to tell whether we're looking at scalar, vector or matrix operations. Still, I'm willing to accept this as my own personal issue. Indeed, it is part of a larger weakness on my part. Basically, I have trouble decomposing problems into matrix operations.
OK, it's easy to do in simple cases. But, let's say I'm building a neural network. I'm storing my weights in matrices. Now, I want to minimize the amount of iterating that I'm doing--but I often have trouble seeing the opportunities to parallelize my operations. It can be done. I've built my neural network code, and I've spent a considerable amount of time replacing iterations with matrix operations. But, I don't find it a natural-feeling way to code.
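For instance, computing one layer's activations collapses from a double loop into a single matrix product. A rough sketch, not my actual network code (W, x, and b stand in for a layer's weight matrix, input vector, and bias vector):
% Looping over every neuron and every input...
a = zeros(size(W, 1), 1);
for i = 1:size(W, 1)
    for j = 1:size(W, 2)
        a(i) = a(i) + W(i, j) * x(j);
    end
    a(i) = 1 / (1 + exp(-(a(i) + b(i))));
end

% ...versus the matrix form
a = 1 ./ (1 + exp(-(W * x + b)));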
One of my co-workers has commented that I write Matlab code like I'm writing Java. I think that statement shows not only my lack of Matlab skills, but also her weaknesses in Java. I can tell you without hesitation, my Matlab code is nothing like my Java code. But the underlying criticism still stands: I am often fighting against the language, not working with it.
I see a lot of people doing this with languages I love, and I get incredibly frustrated when they then unfairly criticize those languages. So, I'll accept the blame here, and try to do better in the future.
I do think there are some real issues, however. First off, the language is often inconsistent. For example, in X < Y, the X and Y could be either scalar or matrix values. However, in X && Y, they must be scalars. If you want to do logical operations on matrices, you must use and(X, Y) (or the element-wise & operator). To me, this makes no sense. Why should logical operations work differently than comparisons?
Also, the environment seems a bit buggy. For example, on a single-processor machine, it is incredibly easy to put your code into an infinite loop that locks up your computer. On a dual-core machine, this is less of a problem, since I can ctrl-c my way to freedom, but on a single core, Matlab grabs control and won't let go.
Also, the IDE doesn't have many of the features we've come to expect from a modern development environment. There's a taste of debugging and profiling, but they are not as useful as in other environments. The IDE lacks any real refactoring tools, and I haven't found any tools for running unit tests.
Finally, I don't like the way it organizes the code. Basically, each function must be in its own file. Yes, you can include multiple private helper functions within a file, but they cannot be called from the outside. Also, I often want to test my helper functions, so I need to place them in their own files anyway, at least during development.
This really limits my ability to keep my code base organized. In other languages, I can have files of related functions, and folders of related files. Matlab removes one entire dimension. Yes, I can still group similar functions into a hierarchy of folders--and put those folders in other folders, and so on and so forth. Then I need to remember to add the entire tree to my path. It just feels really clunky to me.
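For what it's worth, the incantation I keep having to remember is roughly this (the path is just an example):
% Add an entire tree of function folders to the search path
addpath(genpath('~/matlab/mytools'));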
So, the take-home message is this: Matlab is a great language for doing mathematical exploration of ideas and building quick prototypes, but it lacks the software engineering tools needed to build robust, large-scale projects.
-Rich-
Monday, July 14, 2008
One of my favorite iPhone 2.0 features...
One of my favorite features of the iPhone 2.0 is a little bit of spit and polish that's not getting much press (I haven't seen anyone else mention it, actually).
Previously, entering passwords on the iPhone was always a super pain in the butt. It was too easy to mistype something, and you couldn't tell that you'd made a mistake, since the password was all dots.
Now, the password fields dot-out all the letters except the last one, letting you see the last letter you typed. Yes, that sacrifices a bit of security, but if someone is peering that closely over your shoulder, they can probably see what you're typing anyway. And somehow, just being able to see the last letter makes it so much easier to type in my passwords correctly.
Oh, there are a lot of more-obvious features that I could also rave about, but I think the new password fields deserve a little love.
-Rich-
Thursday, June 12, 2008
Unimpressed...
OK, I finally managed to watch the keynote. Which keynote? Surely you're joking. The 2008 WWDC Keynote, of course.
Now, it's Thursday, and the keynote was Monday. That by itself should tell you a lot. Usually, I would try to find a way to watch a Steve Jobs presentation as soon as it appeared on the Apple site. This time around, I felt no great longing to see the actual presentation. I could tell already, from the news trickling through the web, that I was going to be disappointed.
Don't get me wrong. The 3G iPhone is nice. GPS is nice. Better battery power is nice. And the new price is astounding. But, I don't think I'm going to run out and buy one. I love my iPhone, and I can't wait for the 2.0 update. I'm itching to develop my own apps for this platform. I might even buy a 3G phone for my wife. But, I don't think the changes are significant enough to warrant upgrading.
Of course, I might change my mind. Maybe a series of new, cool apps will require 3G or GPS, forcing me towards an upgrade. But, right now, I can wait.
And there are things that I'm waiting for. How long until Apple releases a 32 GB iPhone, or a 64 GB? Currently, my iTunes library sits at 27 GB. I'd definitely upgrade to a phone that could store all my media.
I'm also still waiting for Flash. Apple claims the iPhone provides real access to web pages. But, I'm sorry. Without Flash, it's not a real browser. There are too many things I cannot access.
And there are the other dream features. The forward-facing video camera for mobile video conferencing. The auto-rotating marshmallow skewer and bacon stretcher. I'm looking for that unexpected Apple touch that places the new iPhones even further ahead of the competition.
Finally, I mentally place 3G and Blu-Ray in the same category. They're nice technologies, but I think they may be a little too late. I suspect on-demand, HD movie downloads will kill Blu-Ray before it ever becomes truly popular. Similarly, I think something (maybe WiMax, maybe a new technology using the soon-to-be-freed analog TV bands) will soon wipe 3G away. Of course, I'm probably dreaming of things 5 years in the future--so a 3G phone may still be a safe bet, assuming you're going to upgrade it in a few years anyway.
Next comes Mobile Me. A lot of people have raved about Mobile Me, claiming that it is much, much better than .mac. Again, there's nothing wrong with it. Yes, the push email/calendar/contacts is nice. But, really. It's not a feature I need.
I don't need to receive my email instantaneously. If someone needs to contact me that desperately, they should call me. After all, it is a phone. And syncing my calendar and contacts once a day is fine. The web interfaces look cool, but I usually read my personal mail on my phone anyway, or on my home computer.
Now, if they gave my iPhone "Back To My Mac" capabilities, then we'd be talking about a technology I could get behind. Even if it just let me browse my home folder and open files remotely. The iPhone can already open Word, Excel and PDF files (not to mention a variety of media files). 2.0 will add support for PowerPoint and the whole iWork suite. I'd love to be able to browse through files on my hard drive at home, and open and view them remotely. Even better, let me email them from my iPhone, or let me copy them to a public .mac/Mobile Me folder. This would let me access my home files even when I'm trapped in a PC-only environment (like work).
Finally, there was an incredibly brief mention of Snow Leopard. Now, I can't say I'm excited about Snow Leopard, since we know next to nothing about it. However, I'm going to make a bold prediction here, based on rather sketchy evidence. First, I think the name is deliberately tied to Leopard. Snow Leopard may well be a variant on Leopard, not an entirely new OS. We've also heard that it won't contain any significantly new features--which would be appropriate for a variant.
Second, the marketing speak suspiciously refers to Snow Leopard as OS X, not Mac OS X. This may or may not be significant. Apple has already moved OS X away from the desktop with the iPhone OS. This could be a step in a similar direction. Or it might just be a marketing decision.
Finally, it is apparently Intel-only and fully 64-bit. Here's the question no one has asked: does that mean it won't run on 32-bit Intel Macs? There are a number of those floating around. My MacBook Pro is one. Will I be left out in the cold?
Or, is Snow Leopard an OS for an entirely new type of device? Something that can leverage the touch technologies pioneered in the iPhone? I'm not saying a tablet. I've been waiting for an Apple tablet so long now, I've basically given up hope. But what about a laptop with a touch screen? Alternatively, Apple could be planning new hardware with a significant jump in the number of cores. Snow Leopard seems to emphasize parallel computing--it might be nice to have new hardware that could really take advantage of Grand Central.
The bottom line is, I don't think Snow Leopard is an OS for the computers we have today. I'm not sure what it runs on, but I hope it will be a nice surprise.
OK, back to the keynote. While everything that Steve presented was good and interesting, nothing made me want to rip off my shirt and dance topless in the aisles (and thank goodness for that!). I think Apple's moving in the right direction, but I can't help but be a little disappointed. Where are the surprises? The rumor sites scooped almost everything. Where's the "one more thing?"
Maybe I have set my sights too high, but I can't help but feel a little let down.
-Rich-
Friday, May 30, 2008
Frustrated when people just don't get it...
Ok, I'm not one of those "Ruby (or whatever language) will kill Java" people. I don't like Java, but I'm a proficient Java programmer. I probably know it better than any of my preferred languages, since I've had to use it so much for work. I don't wish it would die, though I do wish I could get away from it for a while...
Still, I found myself getting a bit hot under the collar when I read 13 reasons why ruby, python and the gang will push java to die… of old age.
Just look at some of the things that spiked my blood-pressure:
Reason number 1: Syntax is very important because it builds on previous knowledge. Also similar syntax means similar concepts. Programmers have to make less effort to learn the new syntax, can reuse the old concepts and thus they can concentrate on understanding the new concepts.
Reason number 2: Too much noise is distracting. Programmers are busy and learning 10 languages to the level where they can evaluate them and make an educated decision is too much effort. The fact that most of these languages have a different syntax and introduce different (sometimes radically different) concepts doesn’t help either.
Reason number 6: There is no great incentive to switch to one of the challenger languages since gaining this skill is not likely to translate into income in the near future.
To me, this seems like arguing for ignorance. Learning languages is too hard. Learning a new syntax is too hard. Why bother.
And, perhaps it's true for a certain number of programmers. Heck, I know people at work who feel exactly this way. It may even be the majority opinion, for all I know.
But that doesn't make it a good attitude.
I prefer to follow the Pragmatic Programmer's advice. I try to learn a new language every year.
Now, I don't expect to use all these languages on real projects. I'd like to, don't get me wrong. I can often be found whining about how Project X would be so much easier if we could only use Language Y. But that's beside the point. I study programming languages, because studying a variety of programming languages makes you a better programmer, even if you never use them professionally.
Let's move away from computer languages for a second and look at natural languages. It's often said that languages affect how we think. My wife is Japanese, and I know enough Japanese to get by (though her English is much better than my Japanese will ever be). There are definitely concepts in Japanese that I just cannot express in English. We just don't have the words. Sure, I can describe the idea in a round-about way. But I cannot say it directly.
This is often frustrating. If I've been using Japanese a lot, I often find myself unable to express certain feelings accurately in English. Now, the truly odd part is, I never had a need to express those feelings until I learned Japanese. My mental model of the world simply did not include those ideas, at least, not as a crisp, well-defined concept.
The same thing happens with programming languages. Knowing how to decompose a problem for a functional language is very different than knowing how to decompose a problem for an object-oriented language. It provides an entirely different set of tools for breaking down and understanding complexity.
Also, learning a new language often forces you to examine your preconceived notions. Often you find that your preconceived notions are just wrong.
And here's the best part. Just because you cannot use the tools and techniques from one language directly in another, doesn't mean the effort was wasted. Often you can borrow a really useful gem or two. Having programmed in Lisp, I now find it much easier to program recursive traversals of complex data structures. After using Smalltalk for a while, I find I write much smaller methods in Java--which makes my Java code much easier to read and maintain.
So, yes. Learning one programming language is hard. Learning a second is somewhat easier, but it's still a chore. But, by the time you get to your fourth or fifth language, it gets a lot easier.
Finally, from a section labeled "Why many of the new languages will never be popular":
Some languages have very difficult to “get” concepts. For example most of the supporters of functional languages are proud of how concise statements are in their language. This is not really useful for somebody used to think procedural or object oriented. If the only gain from binding and twisting your mind is typing a few less lines then any experience programmer will tell you that this is not the main activity. Writing the first version is just a small part of the life cycle of a project. Typing the code is even smaller compared with the design time. From the second version the game changes dramatically. Maintainability is way more important. Also very important is to be able to add features and to refactor the code. Readability is paramount from version two, and for both development and support teams.
This entirely misses the point. First, I'll repeat my argument from above: "binding and twisting your mind" is good because it forces you to approach problems from a new perspective. That should be justification enough. But wait, there's more!
Functional languages (or any expressive language) often allow you to model complex ideas concisely, but we're not interested in just saving keystrokes. Concise languages have less code. There is a direct correlation between the size of your codebase and the number of bugs in your program. There's also a correlation between size and maintainability.
Let's make this concrete. When I took my Natural Language Processing course, I did all the assignments in Lisp. The TA was a friend of mine, and he often commented that my code was an order of magnitude smaller than the other students'. I could do more with 200 lines of Lisp than they could do with 2,000 lines of Java. That's not "just a little typing." My code had fewer bugs. It typically ran faster, and I never had the memory issues that often plagued the other students.
Now, maybe that's not a fair comparison. These projects fell squarely within Lisp's sweet-spot: command line tools that manipulated lists of symbols. If I was building a UI, I'd probably pick a different tool.
Now, before someone else mentions it, I admit it. Nothing I have said here contradicts 13 Reasons' main argument. These languages that I love may never become popular. They may never kill off Java. But does that really even matter? Popularity has never been a measure of quality. Trust me. My daughter loves Hannah Montana. As far as I can tell, every tween girl in the country loves Hannah Montana. Yet, I know in my heart that it is not quality music (or quality TV, for that matter).
I do hope we move from a monolithic language approach to a polyglot language approach--but Java doesn't have to die for that to happen. It just needs to learn to share.
-Rich-
Tuesday, May 13, 2008
Two new articles.
I just received my May issue of MacTech Magazine, and I have two articles in this issue. Part II of my RubyCocoa article and an overview of the iPhone SDK (at least, what I can talk about without violating my NDA).
Hopefully I'll be able to talk a lot more about the iPhone SDK soon. However, I can safely say, good things are coming.
-Rich-
Hopefully I'll be able to talk a lot more about the iPhone SDK soon. However, I can safely say, good things are coming.
-Rich-
Saturday, May 3, 2008
The problem with JUnit
I'm a big unit test fan. However, I often feel that the goals of testing run counter to the goals of good object-oriented design, at least in static languages like Java.
Object-oriented design is based on the idea of encapsulating behavior. Testing is an attempt to reveal and examine behavior. You can't have both.
I often run into this problem when implementing algorithms inside an object. From an OO perspective, I often want to declare all the methods that perform the scary math as private. They should never be called from the outside. However, I really need to test them somewhere.
Problems also crop up when calling methods that change an object's state. The state is not always directly exposed, and sometimes it's nearly impossible to indirectly detect the change.
I don't feel the same tension when testing in dynamic languages like Ruby. Usually, these languages have stronger reflection and metaprogramming facilities. They let me safely encapsulate the things that should remain hidden, but still allow me to pry my objects open and root around in the guts.
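In Ruby, for example, a test can reach past the public interface with a line or two. A minimal sketch (Calculator, normalize, and @last_input are made-up names):
calc = Calculator.new
# send ignores access control, so even a private method is callable from a test
result = calc.send(:normalize, -3.2)
# instance_variable_get exposes internal state without adding public accessors
state = calc.instance_variable_get(:@last_input)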
I can write tests using reflection in Java, but the syntax is painful. I can get around that with a library of helper methods--but for some reason, reflection makes many Java developers uncomfortable, even when only used in testing. And, truth be told, it never feels as natural as the dynamic language tests.
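The kind of helper I mean looks roughly like this--a sketch, not code from any particular library:
import java.lang.reflect.Method;

public final class TestUtils {
    // Invoke a private method by name so a unit test can exercise it directly.
    public static Object invokePrivate(Object target, String name,
                                       Class<?>[] parameterTypes, Object... args)
            throws Exception {
        Method method = target.getClass().getDeclaredMethod(name, parameterTypes);
        method.setAccessible(true); // bypass the access check, for test use only
        return method.invoke(target, args);
    }
}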
Similarly, I can use a mixture of interfaces and mock objects (for example, using EasyMock) to let me examine the inner workings of a class. Often this simplifies writing the tests, but the results can be incredibly brittle. Building useful mock objects generally requires a detailed understanding of our class's inner workings. If I change the implementation, I will break my mock objects, and then break my tests--even if the new version is functionally identical to the old.
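To make that concrete, here's a sketch of an EasyMock-style test (PriceSource and Worker are made-up stand-ins; the expect line is the part that couples the test to the implementation):
import static org.easymock.EasyMock.*;
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class WorkerTest {
    // Hypothetical collaborator interface, just for illustration.
    interface PriceSource { int lookup(String sku); }

    // Hypothetical class under test.
    static class Worker {
        private final PriceSource prices;
        Worker(PriceSource prices) { this.prices = prices; }
        int total(String sku, int quantity) { return prices.lookup(sku) * quantity; }
    }

    @Test
    public void totalMultipliesPriceByQuantity() {
        PriceSource prices = createMock(PriceSource.class);
        expect(prices.lookup("ABC")).andReturn(7); // encodes exactly how Worker uses its collaborator
        replay(prices);

        assertEquals(21, new Worker(prices).total("ABC", 3));
        verify(prices); // fails if the implementation calls lookup differently
    }
}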
Of course, even the reflection-based testing is somewhat brittle. Reflection, by definition, looks at the implementation, not the interface. But, our interactions with the implementation tend to be more surgical and specific. So, while these tests are somewhat brittle, they tend to be more resilient than the mock-object versions.
I can try to get around all these problems by redesigning my objects. Move my algorithms to a utility class, where they are publicly exposed and easy to test, or add accessors to the internal state, even if the accessors should never be used for non-test code. While this works, it can lead to unnecessarily awkward designs, or exposing more of the implementation than is really necessary.
I think, ultimately, a mixed approach is best. Like many software engineering tasks, we must examine our design and decide which sections are likely to change, and which are likely to remain the same. Static sections can often be effectively tested using reflection or mock objects. Sections that are likely to change should be encapsulated and tested as separate objects.
The trick is then to successfully separate one from the other.
Wednesday, April 9, 2008
Interested in Nu
If you haven't already, check out Nu. A LISP variant, implemented in Objective-C with heavy Ruby sensibilities. It's like all of my favorite things, rolled into one.
Well... it would be nice if it had Xcode support. And it seems a long way away from a 1.0 release. Still, it looks quite interesting.
-Rich-
Monday, March 24, 2008
Time Machine Update Update
Nope, my wife's computer has been refusing to do backups since shortly after the 10.5.2 update. Apparently, she got tired of seeing it complain, and just shut it off. Anyway, it's not just me.
Wednesday, March 19, 2008
Time Machine Update
Here are a few things I've learned about Time Machine since writing the article.
1) I had to send my MacBook in for repairs. Apple replaced the logic board. Therefore, I had a new Ethernet MAC address. Therefore, I could no longer access my backups. I found some hackish instructions online that helped me fix it. But, I told them that I had backups when I sent it in. If they had wiped my hard drive, I would have panicked.
2) When booting from a Leopard DVD, I did not have the option to restore from backup. The problem seems to be that I need to set up the wireless connection to the network, and I need to log into the host machine, before getting access to the backup bundle. The software on the Leopard DVD simply doesn't give me the option to do that.
In the end, I erased and reinstalled OS X on the machine. I set up a dummy account, used that account to access the network and log into the host machine, and then connected to the Time Machine backup and restored all my data.
I think there may be a better option here. I think I spotted something that suggested connecting the drive directly (even though you typically cannot connect the hard drive directly if you've been doing remote backups). I don't know. It sounds sketchy to me. But it might be worth a try the next time a machine goes down.
3) 10.5.2 seems to hate Time Machine. Since upgrading, my backup has become more and more unreliable, and has taken longer and longer. Finally, it stopped working entirely. I couldn't even mount the bundle anymore. I let DiskWarrior work on it for three days. It reported over 50,000 errors, but wasn't able to do anything useful. In the end, I had to delete the bundle and start from scratch.
Actually, it may have been the hacks I mentioned in step 1 that eventually killed it--but I don't think so. Other people seem to be having the same problem. I thought 10.5.2 was supposed to improve Time Machine stability.
4) While I cannot mount my backup sparse bundle directly on the host machine, I can mount any backup bundle from any remotely logged in machine. No password needed. That's kind of scary.
That's it. If you have any other tidbits, list them in the comments.
-Rich-
Tuesday, March 4, 2008
Sometimes I think Apple's out to get me!
So, I write an article about installing Ruby on Rails on Tiger. Then Apple announces that RoR will be included in Leopard.
Then I write an article about using Time Machine over a wireless connection. Just days after the magazine hits the stand, Steve Jobs announces Time Capsule.
Now, I've just finished a pair of articles on RubyCocoa. They haven't even been published yet, but Apple's already at it again.
Today, I just learned about a new, open source project backed by Apple, called MacRuby.
MacRuby is a Ruby 1.9 port that runs on top of the Objective-C runtime. It's not ready for prime time yet, but it looks promising. First off, it is Ruby 1.9--which I think is a great thing. Among other things, this means it will be much, much faster than RubyCocoa, which uses Ruby 1.8.
From there, things get really interesting--and just a bit odd. All MacRuby classes are subclasses of NSObject. Because of this, their instances inherit all the base object methods from both Objective-C and Ruby. They've also added an expanded syntax for keyed attributes in method calls.
So,
[person setFirstName:first lastName:last];
becomes
person.setFirstName(first, lastName:last)
or
person.setFirstName first, :lastName => last
You can even write your own classes in Ruby that use keyed attributes:
def setFirstName(first, lastName:last)
@name = "#{first} #{last}"
end
Of course, you can do all the cool RubyCocoa tricks. Make calls back and forth between the Objective-C and Ruby portions of your code. Import Cocoa frameworks. Etc. But, in MacRuby, it all looks just a little bit tighter.
For example, MacRuby's String, Array and Hash classes are simply subclasses of Objective-C's NSString, NSArray and NSDictionary. This lets you transparently pass objects between Ruby and Objective-C.
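In practice, that should mean snippets like this just work (a sketch based on my reading of the MacRuby docs--I haven't actually run it yet):
s = "hello"                 # an ordinary Ruby String...
puts s.kind_of?(NSString)   # ...is also an NSString, so this should print true
puts s.uppercaseString      # and Objective-C methods like uppercaseString come along for free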
I look forward to playing around with this project as it develops.
-Rich-
Sunday, March 2, 2008
Review of The Unofficial Lego Mindstorms NXT Inventor's Guide
I love the Lego Mindstorm kits, and I've always enjoyed No Starch Press's books. So I was excited to hear about their Unofficial Lego Mindstorms NXT Inventor's Guide (ULMNIG). These are two great tastes that taste great together.
Unfortunately, I was led a bit astray by the title. "Inventor's Guide", to me, summons mental images of crazy Lego hacks, but that's not the goal of this book.
In the introduction, the ULMNIG describes its true intentions--taking you beyond the user guide and instructions that came with the Mindstorm kit. It does not assume any previous experience with Lego or Mindstorms, but helps you explore a broader range of projects and possibilities.
As an entry level book, I think the ULMNIG overwhelmingly succeeds.
The book starts with a description of the Lego pieces, then provides basic guidelines for building sturdy structures and functional gear trains. For me, this was the weakest part of the book. Don't get me wrong. It has solid information, and should be useful for beginning builders. But it felt too short and too superficial for my tastes.
The ULMNIG then spends two chapters exploring the NXT-G programming language in detail. If you are going to use NXT-G, then you need to read these chapters. They provide a lot of information that will help you get the most out of your Mindstorm brick. They are also much clearer and more informative than the user manual. Reading these chapters will save you from hours of frustrating trial and error.
Finally, the last half of the book covers six new robot designs. Four of these designs are radically different from each other. One is a differential drive with a ball caster. One is a four-wheeled steering vehicle. One is a six-legged walking motion sensor, and one is a stationary bot. There are also two variations on the differential-drive bot.
This gives you a nice combination of projects. The designs increase in complexity, allowing you to improve your skills as you progress through them. Building them will teach you a wide range of design techniques, while the variations show you how you can modify existing designs for other purposes.
The projects are definitely the highlight of the book. Working through the projects will teach you more about building robots than the rest of the book combined. And, once you're finished, you should be ready to jump into your own projects.
Unfortunately, advanced builders and programmers might find themselves somewhat disappointed with this book. The ULMNIG hints at several advanced topics--building dynamic structures and third-party programming languages--but these only get the briefest introduction. A few paragraphs each, tops. And the ULMNIG doesn't even mention other advanced topics, like third-party sensors and hardware, or attaching your own circuits to the NXT brick.
So, I would not recommend this book for everyone. But, if you've finished all the projects in the Mindstorm Users Guide and you're still struggling to build your own robots, then this is definitely the book for you.
-Rich-
Monday, February 25, 2008
Some articles online
MacTech magazine has released three of my articles online. Please check them out.
1. Introduction to Ruby on Rails
2. Ajax on Rails
3. Lego NXT on the Mac
Sunday, February 17, 2008
Hi
Please excuse the dust. Things are still under construction here.
Hopefully I'll get everything polished away sometime this week.
This blog will focus on things that I think are cool: robotics, AI, Apple products, Lisp, Objective-C, Smalltalk, Ruby, DSLs, and probably the occasional half-blind-with-fury rant about Java. Hopefully it will serve two purposes. 1) Most importantly, I will post additional information about my articles here. This may include changes, additions or support material. 2) It will serve as a dumping ground for all the semi-random tech ideas that pop into my head. Sorry about that.
Well, I hope you enjoy it.
-Rich-