Saturday, January 30, 2010

Choosing a Supervisor, part 4

6. Talk to Ex-students
Perhaps one of the most important things you can do is to speak with current and ex-students of your potential supervisor. They will give you the real story of what it is like to work with him or her, what the research group's character is like, etc.
Some questions to ask include the following: How long does the typical student take to finish their PhD in your potential supervisor's group? What are the outliers like and why? Does the student feel like a peer of his or her supervisor, or are they more deferential? What fraction of students that start a PhD with your potential supervisor actually finish? Has the supervisor lost students to other groups or supervisors? How does the supervisor work with students? How much time does the supervisor give to their students? Do they have an open-door policy?
I am certain that the more outgoing senior graduate students in the group will be happy to share their experiences. If none are willing to speak up, consider that a sign in and of itself. Recent graduates are also much more likely to give you the inside skinny, for obvious reasons.
7. Public Presence
Being a successful scientist is half about doing the science and half about communicating the science. Some scientists communicate strictly through the peer-reviewed research paper, while others embrace every new "Web 2.0" medium that pops up.
How up-to-date is the potential supervisor's web presence? Do they use Twitter? Have a blog? Use Facebook? Do they convey research results in a modern and timely manner, or are they more "old fashioned?" Does any of their research occasionally hit printed mainstream media? Are there press releases? Is your potential supervisor ever quoted in the media or seen on television?
What is your attitude toward communicating results? What is the role of the citizen scientist and her relation to the media?
8. Attitude toward Teaching
What is your potential supervisor's attitude toward teaching undergraduates and graduate students? Do they volunteer to give guest lectures to high school classes, summer interns, incoming students, and other non-core student cohorts? Do they connect teaching with their research, or are these two main facets of their work lives disconnected?
Or do they think of teaching as an unwelcome burden, their time being better spent focusing entirely on supervising or conducting research?
What is your attitude toward teaching? Do you want to learn from and be inspired by a great instructor, or is teaching a necessary evil? Do you want to work with someone who spends time writing textbooks, or do you believe the only thing worth writing is the paper in the number one conference in the area?


Wednesday, January 27, 2010

Choosing a Supervisor, part 3

This is part three of the post, Choosing a Supervisor.
The next few questions I suggest asking yourself before choosing a supervisor are as follows:
3. What is their notoriety?
The personality and demeanor of a supervisor are critical to your success. In particular, their personality needs to "fit" with yours.
Some people work very well under high pressure with a very critical supervisor; others crumble. Some want a very hands-off supervisor and lots of independence. Others want kid-glove treatment and to hear only positive feedback.
Each supervisor has a very different approach to interacting with PhD students, and often that approach is very consistent over a multi-year timeframe. Some supervisors are just asses, plain and simple, but because they are research superstars, it is tolerated.
Do you want to work for an ass? A nice guy? An introvert? An extrovert?
4. How do they work?
Different researchers work in different ways. To be a successful world-class researcher requires discipline and commitment, but it does not necessarily require giving up a social life, a family, and television. The way that your potential supervisor works often tells you how they expect you to work as well.
Are they a 9-to-5 type, or will you find them in their office at 11pm? Do they work directly with students in their lab, or do they only "manage" research? Do they seem to work very hard, but without a lot of results, or vice versa? Do they expect their students to work as hard and long as they do, or much harder or longer? Do they have families? Kids?
Do you want to work for a workaholic with a family that they never see? Or perhaps a professor that has found a good balance between family and work? Or a single person completely committed to their career?
5. Theoretician vs. Experimentalist/Breadth vs. Depth
Like many fields of science, computing sees some research that is purely theoretical and often requires nothing more than a pencil, paper, and grey matter. Other groups eschew such research and focus entirely on an experimental approach, building realistic, non-trivial systems that are evaluated through quantitative analysis.
The notion of "experiment" also varies from sub-discipline to sub-discipline. Within some communities the notion is quite ad hoc, while in others it is very rigorous.
Finally, some research groups are very focused and deep, exploring one specific topic for many years. Others are quite broad, permitting their students to choose from a wide set of topics---sometimes all of computing!
What kind of group do you want to be a part of? Do you prefer rigorous mathematical foundations, or would you rather just build and measure systems? Perhaps you see the value and attraction of both approaches? Do you want to be the single world-expert in one highly-focused topic, or would you rather play in a bigger sandbox, perhaps not digging as deeply?


Wednesday, January 20, 2010

Applied Formal Methods at National Taiwan University

On 1 January, Joe wrote about his New Year's resolutions, one of which involved doing more "public" writing in 2010. He said that he had tried to draft me into this resolution with him, but was not sure whether he succeeded. He did, in fact, succeed... and so my goal, like his, is to write or edit at least 1,000 public words per day (on average).

Unfortunately, I've fallen somewhat behind. Today is 20 January, and so far this year I've submitted a 9 1/2 page paper to QSIC 2010 and given a public talk (more on that later) for which the slides (available here) comprise about 1,500 words. I figure that puts me at about 10,000 words... and therefore behind by about 10,000 words (after this post, I'll still be behind by about 9,000 words).

One of the reasons I've fallen behind (and this is not meant to be an excuse) is my holiday travel - from 31 December to 13 January, I was in Tokyo and Taipei; both are very interesting places to visit, and Tokyo at the New Year is especially interesting because that's pretty much the most important holiday on the Japanese calendar (unlike much of the rest of East Asia, where the New Year on the Chinese calendar is the most important holiday). Thus, there are multiple special events: New Year's Eve celebrations, the Emperor's annual address to the people and the opening of the Imperial Palace grounds on 2 January, special food, and of course fukubukuro (lucky bags). Sadly, I missed my chance for a lucky bag at the Apple Store in Ginza by several hours... but I digress.

The reason for mentioning my holiday travel here at all is the public talk I gave at the Department of Computer Science and Information Engineering at National Taiwan University, entitled "Building Reliable Software with Applied Formal Methods: A Brief Overview". The talk was arranged when I told a contact of mine there that I was going to be in town, and fellow Caltech Ph.D. Hsuan-Tien Lin was my very gracious host. He told me that nobody in the department really had any experience with applied formal methods, so I prepared the talk as an overview of the tools and technologies we use in our verification-centric software engineering process (described in A Verification-centric Software Development Process for Java).

I initially expected that the talk would be received in much the same way as the (very similar) talk I gave at KITECH last August, where there was also little in the way of formal methods experience in the audience; at KITECH the response was one of polite interest and some good questions were asked, but there was an overall sense that they were not likely to adopt any of the techniques I described. Imagine my surprise, then, when I ended up eating lunch with (and later answering questions from) Yih-Kuen Tsay, a professor in NTU's Department of Information Management, who was not only familiar with JML and BON but also knew specifically about KindSoftware and the Mobius PVE! Dr. Tsay is, by a remarkable coincidence, an academic nephew of mine and Joe's; his Ph.D. advisor, Rajive Bagrodia, was one of Mani Chandy's students at UT Austin.

Dr. Tsay teaches courses entitled "Software Development Methods", "Software Specification and Verification", and "Automatic Verification" (among others). He told me that he had considered using JML and its related tools (including Mobius) for his courses but that, for several reasons (primarily having to do with the maturity of the tool support), he is currently using UML/OCL and Frama-C instead. We had a very interesting discussion where he told me that he would very much like to be able to use JML in his teaching, and I told him a bit about OpenJML and the current Mobius PVE release. He also stated that, for better or worse, UML and OCL are industry standards, and wondered if we (the JML community) had ever given any thought to attempting to standardize JML, or some subset of JML, through an international standards body.

I actually don't know whether the JML community has considered attempting a "JML Standard" or not, though I suspect not... and I also don't know whether it's a good idea, though again I suspect not. My instinct is that the standardization rabbit hole is one that we (as academics) would be better off not diving into, lest we never emerge to do non-standardization work; but at the same time, actually having a standard might encourage industrial tool support, which would facilitate said non-standardization work. Certainly, it's interesting to think about what the ramifications might be.

As for the talk itself, it went quite well. It was on 8 January, the last day of instruction for the autumn semester; they have 18-week semesters at NTU, which makes me incredibly jealous, as I always have to struggle to fit material into my 10-week quarters here at UWT. The students and faculty who attended seemed quite interested, and I have no doubt that if there were easy-to-use, modern tool support for the development method I described, many of them would try it and some would end up adopting it. If only there were a way to entice more people to contribute to OpenJML development...

Tuesday, January 19, 2010

Choosing a Supervisor, part 2

This is part two of the post, Choosing a Supervisor.
The second question I suggest you ask yourself with regard to choosing a supervisor is:
2. What is their impact and its timeliness?
Some researchers make their reputation once, so to speak, often early in their careers, then sort of just glide on it for the rest of their lives.
Others reinvent themselves every 5-10 years, either by moving to new fields, writing high-impact books, making fantastic new discoveries, or inventing useful new concepts. Perhaps they also co-create a startup and give technology transfer a try.
Do you mind working for someone who "was good back in the 80s" when it is 2010?


Sunday, January 17, 2010

Choosing a Supervisor, part 1

I have heard the metaphor that choosing a supervisor for your PhD is like choosing a spouse or a parent.
I think that, while both of these have a ring of truth to them, the better metaphor is this: choosing your supervisor is like choosing who you will be, the future you, in a decade or two's time.
When I first went to graduate school at the University of Massachusetts, Amherst, I was admitted as one of thirty-odd incoming miscellaneous postgraduate students, as is typical for the vast majority of large, research-centric PhD-granting programs in the U.S.A. The majority of our time during the first two years was spent taking graduate courses in computing, working as Teaching Assistants (TAs) helping to teach undergraduate (and sometimes graduate) courses, and preparing for our PhD exams and/or building our portfolios.
I was offered, and declined, an RA position with the LASER group, led by Profs. Lori Clarke, Lee Osterweil, and Jack Wileden. Other students in the group at the time included my distinguished colleagues Peri Tarr and Matt Dwyer; Alex Wolf was a semi-recent graduate.
Their group, at least at the time, ran using the "cog-in-wheel" approach to research groups. In that approach, one of several about which I will write in a future post, students are "given" a project on which to work, and typically that project becomes a key component of their PhD. I was interested in finding my own topic, and at the time I was consumed with distributed systems and computer graphics, so I felt the LASER group was not a good match for me. In the end, after teaching for a year, I took a part-time job as a system administrator for the department and ended up doing an MSc with Chip Weems.
Had someone given me advice like that which I am writing now, I likely would have stayed in that group and gone on to have an equally good career, perhaps with fewer fits and starts (good) and less variety (bad).
Now, several universities, companies, degrees (and fifteen years) later, I have seen the full gamut of PhD supervisors, good and bad, and witnessed the inspiration and chaos they instill in research students, fresh and hoary.
Consequently, here are the questions that I think you should ask while choosing a supervisor. I'll post one a day for a week or two, then summarize in a final post.
1. What is his/her research reputation?
Do you want to work for a superstar or an everyman? If you do not really care about their research impact, then there isn't anything to investigate.
But, if you are interested in someone who is very good (i.e., with an international reputation, well-respected, makes an impact, etc.), look to see what program committees they are on. You want to see all "A"-profile conferences and better. Look also to see if they are on the organizing committees of any long-lived, high-profile, high-quality conferences. Also look to see if they are frequently invited to give keynotes at major conferences.
Do you want to work for a supervisor who has never been on a program committee, or has only been on committees of faceless conferences of dubious reputation?


Monday, January 04, 2010

Soundationalists

Soundness is a much-maligned metatheoretical property.
To the theoretician, soundness is an obvious given: if your theory is not formally sound, it is useless. To remind the reader, a logical theory is sound if and only if its inference rules prove only formulas that are valid with respect to its semantics. In most cases, this means that its rules have to preserve some notion of "truth," so that in each step of a proof, validity is preserved. Even Wikipedia says, "Soundness is the most fundamental property in mathematical logic."
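(In symbols: a proof system ⊢ is sound with respect to a semantics ⊨ when, for every formula φ, ⊢ φ implies ⊨ φ; everything the rules can derive is semantically valid.)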
To the practitioner though, soundness is a desirable, but not necessarily mandatory, property enjoyed by few tools. Most tools that are "sound" end up being too difficult to use, as they have too many seemingly arbitrary restrictions due to assumptions made in defining the underlying theory (e.g., Spec#), or they require interactive use by experts (e.g., the LOOP tool).
In counterpoint to these "sound" tools (more on the use of those double quotes later), automated tools that require little intellectual or fiscal investment are the only tools that have been broadly adopted and have made an impact in recent years. But automation comes at a price, and for complex systems analysis, that price is usually soundness.
Unfortunately, this second perspective is lost on many theoreticians. They not only insist that soundness is a mandatory property for a logical theory—the quality research of mathematicians who develop deviant (non-classical) logics and paraconsistent logic systems notwithstanding—they also demand that all tools be sound, dismissing any tool that is not sound as worthless. I call these kinds of researchers "soundationalists".
I have several problems with the soundationalist point of view.
Firstly, the soundationalist is forcing a value judgement on others. Inconsistent systems are all around us, and yet we, as humans, work with and within them every day, with aplomb. Formal mathematical systems, whether they are logics or tools built upon logical foundations, are no different. The humans that work with such systems will adapt to their particular quirks, whether the quirks are harmless idiosyncrasies of operator precedence or more serious challenges, like the fact that a particular theory has dangerous corners that are not known to be sound. Remind the soundationalist of the consistency issues with naive set theory and they will quickly change the subject.
Secondly, I find that very few soundationalists have practical experience in building or using tools grounded in formal theoretical foundations. They are often the researchers who claim that translating a theory into practice is "just a matter of engineering." Thus, my problem is that the soundationalist has lost their connection with the "reality" of our discipline.
Alternatively, they are the ones that delegate the dirty job of realization (i.e., implementation) to their graduate students, never involving themselves in issues of architecture or programming, and then proclaim that their software is sound because their theory is sound.
Whenever I hear this claim I immediately presume the speaker does not know what they are talking about or is trying to mislead me. The myriad design and implementation choices, trade-offs, and challenges that accompany implementing any formally grounded software system allow dozens, if not hundreds, of soundness compromises to creep in. In my entire career I have yet to see the implementation of a sound formal system result in a sound formal tool with an accompanying mechanical proof of soundness of both claims.
Lastly, the soundationalist is ignoring reality. Non-trivial tools grounded in formal systems must make compromises if they are to be automated, since very few interesting and useful problems are decidable in the first place. A verification system that attempts to handle simple arithmetic is already over the precipice; one that claims to reason soundly about procedural or object-oriented programs—programs that are the concrete realizations of some of the most complex mathematics ever invented by man—is simply being specious.
I propose an alternative strategy to dealing with soundness.
First, by all means attempt to develop sound logical theories, but do not shirk your responsibility in proving your theory sound. A hand-waving proof in a short paper is persuasive only to the persuaded. Mechanically formalize your theory in an appropriate logical framework and show how smart you are by closing the book on soundness: develop a watertight soundness proof in all its gory and glorious detail.
Secondly, learn a little bit about alternative, possibly unsound, strategies in developing and using non-traditional logics and logical frameworks. There are so many interesting pieces of work out there with broad application, ranging from Haack's survey of deviant logics [1] to paraconsistent logics' use in knowledge representation, abductive reasoning, and belief logics [2-6].
We need to see more mechanized work in these logics. I would love to use or develop a non-classical model checker. The application of paraconsistent reasoning to the hard problems plaguing (or, more commonly, ignored by) those working in the "semantic web" and "agents" research areas represents some interesting, novel, and non-trivial low-hanging fruit for the right research team.
Lastly, if you build a tool based upon a sound theory, (a) document every design and development decision you make that possibly compromises soundness and (b) make it a mandatory feature that your tool has a metatheoretical warning system, much like ESC/Java2 does [7]. Your users will be better informed about the reasoning that they are actually performing and they will better understand the concessions that they must make to use your tool for its intended purpose.
Also, of course, you will be more honest about your scientific product, and everyone likes a self-deprecating scientist who spends serious intellectual effort pointing out their own flaws and their competitors' successes.
[1] Susan Haack, Deviant Logic. Cambridge University Press, 1974.
[2] G. Priest, R. Routley, and J. Norman, Paraconsistent Logic: Essays on the Inconsistent. Philosophia, 1989.
[3] Max Urchs, Essays on Non-Classical Logic, chapter "Recent Trends in Paraconsistent Logic." World Scientific Publishing, 1999.
[4] Diderik Batens, Frontiers of Paraconsistent Logic. Taylor and Francis, Inc., 2000.
[5] Diderik Batens, Paraconsistent Logic: Essays on the Inconsistent, chapter "Dynamic Dialectical Logics." Philosophia, 1989.
[6] C. Damasio and L. Pereira, Handbook of Defeasible Reasoning and Uncertainty Management Systems, chapter "A Survey of Paraconsistent Semantics for Logic Programs." Kluwer, 1998.
[7] Joseph Kiniry, Alan Morkan, and Barry Denby. "Soundness and Completeness Warnings in ESC/Java2". The 5th International Workshop on the Specification and Verification of Component-based Software (SAVCBS 2006). Portland, Oregon. November, 2006.


Sunday, January 03, 2010

Unshipped Software Does Not Exist

In much of computer science, at least the "systems" variety, an enormous amount of effort is spent designing, developing, and experimenting with software systems. Meaning, we write programs to make concrete our new ideas, show off our inventions, and validate our claims.
In the world of hard science, this engineering, albeit of the software kind, is more akin to experimental science than theoretical. We are like the physicists who build super-colliders, smash together atoms, and measure the results to validate or invalidate hypotheses posed by ourselves or others.
Despite what the theoretician might tell you, developing a complex software system is non-trivial and is more than "just a matter of engineering." It is often an incredibly complicated endeavor that continuously opens up (and sometimes closes) new research doors, most of which we never publish. In my experience, merely shipping a complex software system takes about the same amount of time as writing a conference paper.
And thus we have our dilemma: we have an experimental science, that of systems-based computer science, whose sole output, for the vast majority of researchers, is the twelve-page conference paper, in which only the Smallest Publishable Unit (SPU) of the work is described.
Nowhere is the full software system described so that others can replicate the "experiment." Though we are a discipline that thrives on abstraction, you essentially never see a full, or even partial, specification of a research software system.
And obtaining a copy of the concrete system designed and built by researchers over many man-months? Forget it. It is always "not quite done" or "needs to be cleaned up." Or perhaps it is "pending an IP review" by a technology transfer office. Heck, some researchers simply do not answer their emails or return phone calls when I ask them for a copy of their system!
More often, the reason scientists do not ship is more pragmatic and more cynical. Shipping software is simply not directly rewarded at nearly all universities. Tenure reviews and promotion panels sometimes even state that developing and shipping software is a waste of time, time better spent on writing peer-reviewed papers.
In my view, this situation is untenable and this behavior is unforgivable. This is not legitimate science or engineering.
If you do not ship a research software system, it does not exist.
As in the physical sciences, where one cannot publish a paper unless the experiment is described in excruciating detail and the data is often made publicly available, I believe that one should not be permitted to publish results based upon an unshipped and undescribed system.
When I review research papers that discuss results coupled to software systems, the first thing I search for in the PDF is "http." If I cannot find a mention of how and where to download the system in question, my warning bells go off. If a Google search turns up nada, I reject the paper, as simple as that. Hollow promises of shipping after publication or at some later date are ignored, as they are so often unfulfilled.
I also know from personal experience how rewarding developing and shipping a software system can be.
You are opening your heart and head to the world by showing everyone exactly what you are made of. Sure, you may have fewer papers than some competitors, but your limited time budget for writing means that you must more tightly prioritize writing goals and publication targets.
The notion of SPU goes out the window as you want to put as much into each research paper as you can fit, rather than as little as will be accepted for publication.
Finally, if you develop systems that are useful and usable, you gain an audience of industry and academic users that is typically at least as large as the number of people that would have read that conference paper or two you did not write, and typically orders of magnitude larger.
My advice to the young PhD Computer Science student? Ship your software; you won't regret it.


Saturday, January 02, 2010

RSpec

I enjoy discovering when old, good ideas from the research community eventually trickle their way out into common practice, but sometimes what you discover surprises you.
For example, contracts are a great idea that should see wider use, especially in languages that have provided assertions from day one. But after you discover the nth framework/tool for providing contracts in a language like C++ or Python, one that supports neither contract inheritance nor visibility, you get a little deflated.
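To make the idea of a contract concrete, here is a minimal sketch of runtime contract checking in plain Ruby. The Account class and its conditions are hypothetical examples of my own, not any particular framework's API; note that this naive approach handles neither contract inheritance nor visibility, which is precisely the shortcoming lamented above.

```ruby
# A minimal, illustrative sketch of runtime contract checking in plain Ruby.
# Account and its conditions are hypothetical examples; real contract
# frameworks must also handle contract inheritance and visibility.

class ContractViolation < StandardError; end

class Account
  attr_reader :balance

  def initialize
    @balance = 0
  end

  def deposit(amount)
    # precondition: only positive amounts may be deposited
    raise ContractViolation, "precondition: amount > 0" unless amount > 0
    old_balance = @balance
    @balance += amount
    # postcondition: the balance grows by exactly the deposited amount
    raise ContractViolation, "postcondition violated" if @balance != old_balance + amount
    self
  end
end

Account.new.deposit(100)  # satisfies both conditions
Account.new.deposit(-5)   # raises ContractViolation (precondition)
```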
This week I accidentally came across RSpec from the Ruby community. It caught my eye initially because it is described as a "behavior-driven development tool," and one of its popular tutorials states on line one: "Behavior Driven Development is specifying how your application should work, rather than verifying that it works." This sounds like a framework for me. Moreover, since contracts are at the core of behavioral interface specification languages (BISLs) like the Java Modeling Language (JML), and I'm "one of the JML guys," I figured I should get excited about RSpec, or at least learn something from it. But before I dig into RSpec, let's reflect upon Ruby a bit.
I learned Ruby when it had just "leaked" out of Japan many years ago. The only English documentation on the language then was a fragment of the API docs, and thus I had to learn the language by reading other peoples' code. I like Ruby. Purposefully or accidentally, it intelligently synthesizes some of the best ideas of Smalltalk and object-based languages with a prototype-based feel. By prototype-based feel I mean the feel of languages like JavaScript and Tcl and, I'm told, several of the scripting languages that I mentioned yesterday, namely Io, Logtalk, Lua, Omega, and REBOL. My experience with these kinds of languages derives from the literature, namely Abadi and Cardelli's "Theory of Objects" and papers about the Self language. The fact that Ruby gives you some access to its metaclass system, a la Smalltalk, in a relatively clean API unlike, say, the horrid APIs of Python and Perl, is also compelling.
Consequently, I have written a few thousand lines of Ruby, including some of the server-side processing for my research group's website, and thought it would be nice to see a clean OO scripting language like Ruby catch on (as it has, in spades).
So, I hear you say: "Hey, a simple OO language with a clean metaobject framework is ripe for the application of dependable software engineering principles, Joe!" I would say you were right, so let's see what has happened in the World of Ruby... so, back to RSpec.
The first thing to note is that, while the 'B' in "BDD" means "behavior," it is not "behavior" in the sense of BISLs, but instead the "behavior" of the "Agile" community. *sigh* This already starts to worry me, but let's not throw the baby out with the bathwater, because sometimes riding on the coattails of a populist movement like "agile programming" (or "aspects" or "Java," for that matter!) is just a smart mechanism to effect change.
The API and common use of RSpec guide a developer down the path of connecting informal English sentences, written using modal verbs like "must" and "should," with code fragments that interpret the informal specification. Thus, "behavior" in this context is the informal, manual specification and linking of traditional requirements and hand-written unit tests.
Now, anyone familiar with my work in BON and verification-centric development will know that I think codifying requirements, domain analysis, and features in structured English is a Good Thing. And we have been developing a formal refinement between informal specifications in English and formal artifacts like requirements, concepts, tests, types, and assertions (look for a paper on this in 2010). So the juxtaposition of English and code is unsurprising to me.
The codification of assertions in the API is also interesting. Methods like "should" and "should_not" are akin to JUnit methods like "assertTrue" and "assertFalse," though they fit better with the vernacular of the domain. Permitting the definition of pre- and postconditions of unit tests via "before" and "after" methods, akin to aspects and straight from the world of CLOS and the MOP, is also nice to see. There is also integrated support for mock objects, and the use of lambda expressions to talk about the pre-state of a method call is cute as well.
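To give a flavor of how these pieces fit together, here is a minimal sketch of an RSpec specification. The Stack class and its examples are hypothetical ones of my own devising, written against the "should"-style expectation syntax current as of this writing; run it with the "spec" command-line tool from the rspec gem.

```ruby
# stack_spec.rb -- a hypothetical Stack class and an RSpec specification
# of its behavior, using RSpec's "should"-style expectation syntax.

class Stack
  def initialize
    @items = []
  end

  def push(x)
    @items.push(x)
  end

  def pop
    @items.pop
  end

  def empty?
    @items.empty?
  end
end

describe Stack do
  before(:each) do
    @stack = Stack.new            # establish the pre-state for every example
  end

  it "should be empty when newly created" do
    @stack.should be_empty        # be_empty delegates to Stack#empty?
  end

  it "should not be empty after a push" do
    @stack.push 42
    @stack.should_not be_empty
  end

  it "should return the last pushed element on pop" do
    @stack.push 42
    @stack.pop.should == 42
  end
end
```

Note how each English sentence after "it" doubles as documentation and as the name under which any failure is reported; this is exactly the juxtaposition of informal specification and interpreting code described above.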
So in the end, I think RSpec is a pretty nice framework for specifying the behavior of Ruby code, but only if you are willing to accept the fundamental testing premise of agile programming: that hand-written unit tests should specify the behavior of a system. My criticisms of this approach are not unfamiliar. Hand-written executable tests are only maintainable at high cost, and they are expensive to write early if one does not have (1) a fairly solid understanding of a domain and (2) pleasant customers who do not change requirements all of the time.
In other words, I am still unconvinced that, in the key areas where agile programming is supposed to shine, its fundamental tenet, that of test-driven development, holds true. If you are an agile practitioner and have evidence for this claim, please speak up!
I will write more on RSpec later this year after I get a chance to really take it for a test drive.


Friday, January 01, 2010

New Year's Resolutions

One of my New Year's resolutions is to do more "public" writing in 2010.
As it is, I do a lot of "private" writing in the form of coursework, exams, slides, reports for grants, grant writing, thousands of emails, consulting, etc., and only a moderate amount of "public" writing like public reports, peer-reviewed papers, software manuals, web pages, and an occasional blog post. I have tried (and succeeded in?) drafting my friend and colleague Dan Zimmerman into this task, proposing that we each write or edit at least 1,000 public words per day. Every day I do not reach this goal I have promised to donate 1,000 cents (i.e., $10) to the GOP. As you can imagine, this punishment is very motivating.
Up until about 2007, I used to learn a new programming language every few months. Consequently, I now know something between 40 and 50 languages. I want to reboot that effort, as 2008 and 2009 were exciting years for new programming languages. (In recent years my attention has been more focused on revision control systems, higher-order and first-order theorem provers, and new logics.)
To "know" a language means that I: (1) can read it immediately, (2) have written at least several lines of non-trivial code in the language and, (3) after a few hours refreshing myself, if I have not written a program in a language for several years, I can write arbitrary programs in the language.
The high-profile languages that are at the top of my queue are Clojure, Go, Haskell, and Scala. I can read all four, but have never written programs in any of them. I also know quite a bit of the theory behind each language, and have reviewed research papers using them, but I need to bury myself in them for a few weeks to really get my money's-worth.
Performing a search on "programming," "language," and "compiler" in MacPorts also reveals a whole bevy of languages and compilers that I do not "know": arc, argh!, aspectj, bc, boo, clojure, cyclone, gdc/d-mode, embryo, erlang, ferite, ficl, fsharp, gforth, gnudatalanguage, gri, groovy, guile, ici, icon, Io, jekyll, logtalk, lua, mawk, mercury, mozart, ncarg, nesc, nice, nu, Omega, oorexx, pike, pure, q, qore, R, rb-kwartz, rexx, scala, shakespeare, slang, snobol4, squirrel, strategoxt, xotcl, yabasic, 4th, bf2c, cm3, distcc, gpc34, gprolog, gwydion-dylan, ikiwiki, inform, mono-basic, newt0, nhc98, nqc, objc, pnet, ragel, swi-prolog, tom, vala, yap.
I have a special interest in the scientific programming languages like gri, ncarg, and R due to a grant proposal I am working on, so they will get earlier attention. Historic languages like Dylan, Erlang, Forth, Icon, and SNOBOL are also eye-catching. Languages that seem to often come up in online discussion, like Cyclone, D, F#, Guile, Lua, and Pike, are prioritized. Finally, I use some of these systems regularly, like bc, but not to their full extent. If I can get through half a dozen of these this year, I will be happy.
What will I do with these new languages? One thing I am debating is writing a minimal extended static checker using FreeBoogie for each language, in the language itself. Of course, there are many other projects constantly happening in my research group, so there are undoubtedly implementation opportunities there too.
I will report on my progress on these efforts in this blog. I will also be using this blog to reflect upon new theories, tools, and technologies that I come across this year.
After all, why not take a big move to a whole new university as an opportunity to reinvent oneself via self-reflection?
