You'd want to write the patches, and get them merged, in the following order:
1) Be able to store just block headers in the blkXXXX.dat files instead of full block contents. At this point you are still *downloading* full blocks, but they are not being stored. The contents are still sent to the wallet for extracting relevant transactions, though (see SyncWithWallets). You also need to disable listening and addr announcements to the P2P network at this point, since a node that doesn't store block contents can't serve the chain to other peers. You need to be able to re-org and do all the usual things without storing block contents, and you need to short-circuit the leveldbs so they aren't created or used. All of that needs to be unit tested. You also need to rewrite the mempool logic so it throws out irrelevant transactions, and the RPC interface needs to adjust itself so you can't try to start mining, query the UTXO set, etc.
At this point you have an SPV node, albeit one that still downloads the entire block chain. However, total disk storage used will be much lower. Getting this written and reviewed is a big chunk of work, but it's also the hardest part. Once it's done you can breathe easy.
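To make step 1 concrete, here's a minimal sketch of what a headers-only store could look like. This is not the actual Bitcoin Core code: the struct name, the headers.dat file and the raw-struct write are all made up for illustration; a real patch would reuse the client's serialization framework.

    // Hypothetical sketch, not the real client code: an append-only store
    // of 80-byte headers standing in for the full-block blkXXXX.dat files.
    // Field layout mirrors the wire format of a block header.
    #include <array>
    #include <cstdint>
    #include <cstdio>

    struct BlockHeader {                      // 80 bytes, no padding
        int32_t nVersion;
        std::array<uint8_t, 32> hashPrevBlock;
        std::array<uint8_t, 32> hashMerkleRoot;
        uint32_t nTime;
        uint32_t nBits;
        uint32_t nNonce;
    };

    // Append a header after the wallet has scanned the full block for
    // relevant transactions (SyncWithWallets in the real client); the
    // block contents are then discarded instead of being written to disk.
    bool AppendHeader(const BlockHeader& h) {
        static_assert(sizeof(BlockHeader) == 80, "unexpected padding");
        FILE* f = std::fopen("headers.dat", "ab");
        if (!f) return false;
        bool ok = std::fwrite(&h, sizeof h, 1, f) == 1;
        std::fclose(f);
        return ok;
    }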
2) Next step, use getheaders to catch up with the chain until the min(wallet birthdays) is reached. You can see in Satoshi's patch where he adds support for receiving "headers" messages. Because key times are recorded as dates and you don't know the dates of blocks in advance, you need to download headers until you see one that goes past the key birthday minus some slack period, then throw out the headers you downloaded and switch to downloading full blocks again from that point onwards.
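A sketch of the cutover decision for step 2, assuming an earliestKeyTime derived from min(wallet birthdays) and an illustrative one-week slack (both names and the slack value are mine, not from Satoshi's patch):

    #include <cstdint>
    #include <vector>

    struct HeaderInfo { uint32_t nTime; };   // timestamp from the header

    constexpr uint32_t SLACK = 7 * 24 * 60 * 60;  // one week of slack

    // Returns the index of the first header whose timestamp passes the
    // wallet birthday minus slack. Headers before it can be thrown away
    // and full-block download resumed from that point onwards.
    size_t FindSwitchPoint(const std::vector<HeaderInfo>& headers,
                           uint32_t earliestKeyTime) {
        for (size_t i = 0; i < headers.size(); ++i) {
            // Block timestamps aren't monotonic, hence the slack period.
            if (headers[i].nTime >= earliestKeyTime - SLACK)
                return i;
        }
        return headers.size();  // birthday not reached; keep syncing headers
    }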
3) Next step, implement client-side support for Bloom filtering. Switch from downloading full blocks to filtered blocks, verify the Merkle branches, then apply the matched transactions to the wallet. Watch out for accidental re-orderings of transactions away from block order here (e.g. if you accidentally insert them into a std::map or another collection that doesn't preserve insertion order, it can lead to bugs). Come up with some way to decide on a false-positive (FP) rate. Probably you want a fairly high FP rate for desktop wallets, where bandwidth is cheap and the extra noise buys you some privacy against the serving node.
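The Merkle branch check itself looks something like the sketch below. It uses OpenSSL's SHA256 for Bitcoin's double-SHA256 interior-node hash; the branch walk, with the low bit of the transaction's index selecting left vs. right child at each level, is the standard way these branches are verified, though the names here are mine:

    #include <openssl/sha.h>   // link with -lcrypto
    #include <array>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    using Hash256 = std::array<uint8_t, 32>;

    // Bitcoin's interior node hash: double-SHA256 over left || right.
    static Hash256 HashNodes(const Hash256& l, const Hash256& r) {
        uint8_t buf[64], once[32];
        std::memcpy(buf, l.data(), 32);
        std::memcpy(buf + 32, r.data(), 32);
        SHA256(buf, sizeof buf, once);
        Hash256 out;
        SHA256(once, sizeof once, out.data());
        return out;
    }

    // Walk the branch up to the root; at each level the low bit of the
    // transaction's index says whether we're the left or right child.
    Hash256 MerkleRootFromBranch(Hash256 hash,
                                 const std::vector<Hash256>& branch,
                                 uint32_t index) {
        for (const Hash256& node : branch) {
            hash = (index & 1) ? HashNodes(node, hash)
                               : HashNodes(hash, node);
            index >>= 1;
        }
        return hash;  // must equal hashMerkleRoot in the block header
    }

And when handing the matched transactions to the wallet, keep them in a std::vector in the order they appeared in the block; a std::map keyed by txid will silently reorder them.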
4) Next step (optional), implement monitoring of broadcast propagation for transactions that are received. SPV clients cannot verify unconfirmed transactions, so you can either give up entirely and accept any old garbage, or assume a non-MITMd internet connection and use network propagation as a rough proxy for "likely to be valid and mined upon".
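An illustrative sketch of what that monitoring could look like; none of this is from an actual patch, the TxId stand-in and the 50% threshold are arbitrary:

    #include <cstdint>
    #include <map>
    #include <set>

    using TxId = uint64_t;   // stand-in for a real 256-bit txid
    using PeerId = int;

    // Which distinct peers have announced each unconfirmed transaction.
    std::map<TxId, std::set<PeerId>> g_seenBy;

    // Called whenever a peer sends an inv for a transaction we care about.
    void RecordTxAnnouncement(TxId tx, PeerId peer) {
        g_seenBy[tx].insert(peer);
    }

    // Heuristic: if most of our peers have announced it, it's likely to
    // be valid and mined upon. Assumes a non-MITMd connection.
    bool LikelyToConfirm(TxId tx, size_t numPeers) {
        auto it = g_seenBy.find(tx);
        return it != g_seenBy.end() && it->second.size() * 2 > numPeers;
    }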
5) Optimize!
How much you need to optimize really depends on a lot of things. I found that to be competitive with Electrum/blockchain.info I had to do a ton of optimizations, including very aggressive checkpointing (sketched at the end of this post) so new users don't have to download more than a month or two's worth of headers, as downloading all the headers was becoming a bottleneck. You'd need to download about 16 MB+ of data at the moment to grab all the headers, and on a weakass mobile phone with a slow Dalvik VM and 3G internet this was way too much. I also had to spend some time profiling to ensure we weren't accidentally thrashing the UI due to too-fast updates, weren't bottlenecking on updating last-seen block data in the wallet, weren't accidentally de/re-serializing messages redundantly, etc.
After about 3-4 evenings of non-stop profiling and optimising I ended up with a relatively flat profile whilst doing initial catchup and chain sync. On a desktop I bet you can get away with much less optimisation because your CPUs, network and disk tend to be much stronger.
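On the checkpointing in particular, the idea (bitcoinj ships something similar) is to hard-code a table of trusted recent headers so a fresh wallet starts header sync from the latest checkpoint before its earliest key time instead of from the genesis block. A sketch, with placeholder fields and names of my own choosing:

    #include <array>
    #include <cstddef>
    #include <cstdint>

    struct Checkpoint {
        uint32_t height;               // block height of the checkpoint
        uint32_t nTime;                // its timestamp
        std::array<uint8_t, 32> hash;  // its block hash, baked into the app
    };

    // Pick the latest checkpoint older than the wallet's earliest key
    // time, then start the headers download from there.
    const Checkpoint* PickStartingPoint(const Checkpoint* table, size_t n,
                                        uint32_t earliestKeyTime) {
        const Checkpoint* best = nullptr;
        for (size_t i = 0; i < n; ++i)
            if (table[i].nTime < earliestKeyTime) best = &table[i];
        return best;  // nullptr means sync from the genesis block
    }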