A beautiful Tucson sunset tonight + iPhone portrait mode + a smidge of the secret Adobe Lightroom sauce = impromptu photo shoot!
Please welcome the newest member of XWC Lab, a new LAB member! Meet Lani, my two-year-old yellow Lab! Lani came to me via a friend of a friend who was looking to rehome her.
She couldn’t be a more perfect dog! She’s still pretty puppy-like, has no bad habits at all, loves people, loves dogs, loves to chase the ball and bite the hose water. As I type she’s holed up in the bedroom devouring a rawhide chew. We even made a new dog friend in the neighborhood and they’re already besties.
Here are a few more.
And here’s some derp:
Song of the day is in honor of Lani’s new name, which is Hawaiian for “Star” (I like stars ok?). Iz’s version of “In This Life”.
Resident pyramid expert Dr. Lauren Schatz defended her thesis work today, despite pandemic pandemonium. The field has decided to accept her (with minor revisions), and she will be joining the Air Force Research Laboratory in Albuquerque, New Mexico later this year.
We’ll miss her a lot, but every wavefront sensed by MagAO-X will have her fingerprints on it. Well, not literally, that’d be bad optical science-ing. But you know what I mean.
We had all kind-of forgotten how to do the in-person rituals of academia, but we “reserved” a “conference room” and used a “projector.” We also set up Zoom, for good measure (and for everyone beyond the tiny occupancy limit imposed by These Unprecedented Times).
Best wishes in all your future endeavors, Lauren!
Song of the Day
You’ll find in time
All the answers that you seek
Have been sitting there just waiting to be seen
Take away your pride and take away your grief
And you’ll finally be right where you need to be
One of the most painful things I’ve had to do in graduate school is writing. It’s probably the worst necessary evil in academia. I’ll take documentation and giving presentations any day over writing papers. I’ve finished writing the first full draft of the MagAO-X Fresnel modeling paper and it’s going through the comment cycles. But this entry is not about academic writing; it’s about a kind of writing the world as a whole has slowly started to forget.
My first material love is paper stationery. It manifested during my childhood in the Philippines, where I was first exposed to paper stationery shops. It’s a passion that has only evolved over the past two decades, even in the face of technological advancement. I exclusively use 0.38mm and 0.5mm gel and ballpoint pens because they keep my handwriting the cleanest. I’ve been using the same Uni Jetstream 4-in-1 0.38mm ballpoint multipen and pencil my entire PhD because it writes so smoothly. I recently bought my first fountain pen and I’m completely hooked. There are particular notebooks I purchase because the paper is smooth and sturdy, with just the right level of brightness and no ink bleed-through. I discovered dot grid paper a couple of years ago when I tried out bullet journaling (bujo). I’ve since stopped maintaining a bujo, but dot grid paper is my standard preference for personal and research notekeeping. Paper stationery products are hit or miss, and I’m grateful to Kinokuniya Bookstore in Little Tokyo, Los Angeles for being my first resource for exploring quality products.
While paper stationery usage is one thing, letter writing is a different game. Writing and sending cards occasionally was nothing new for me, but it became a regular thing when I moved to Tucson for graduate school. I was transitioning into a distance relationship and I missed a lot of my friends and family. I sent cards as a form of encouragement for myself and a lot of my friends who started their grad programs. Over time, card writing became my personal creative outlet.
I have maintained a few pen pal correspondences through the years. We’re never quite consistent; we regularly fall off the wagon as we each get busy, and that’s perfectly fine. There are no time constraints on these things, and we help each other get back on track. We don’t write much, just little highlights of things we want to share with each other. I wrote to one of my pen pals about how excited I was that my monstera plant’s newly unrolled leaf had not 1 but 2 holes in it. I’ve amassed so many cards from friends through my PhD that I’ve had to buy another decorative storage box to house them all.
My favorite part about letter writing is adding cute card flair. I love finding out about new stamps coming out and using them in my correspondences. My current favorite stamp is a lenticular printed T-rex. I get to learn cool stuff from stamps, such as about Dr. Chien-Shiung Wu and her work in nuclear physics (my excitement for the representation of Asian women in STEM is off the charts). All my envelopes get sent with a sticker by the recipient address and sealed with washi tape.
Letter writing has been my private getaway to detach from a computer screen and hang out at local cafes. Just me, a drink with a snack, a small pile of cards, my favorite pen, and music on my headset. It has brought me to appreciate taking in my environment. Of all the cafes I’ve visited in Tucson, Ren’s Coffeehouse in St. Philip’s Plaza is my favorite letter writing hangout (mostly because they also serve food).
In 2001, the USPS dedicated April as National Card and Letter Writing Month, with the goal “to raise awareness of the importance and historical significance of card and letter writing”. To challenge myself, I made a very lofty goal: write and send 100 postcards to people I know through the month of April. I chose postcards because if I’m going to mail 100 of something, I’m going to do it with the cheapest postage stamp. (Jared is only capable of paying me so much with the graduate student salary limits.)
It’s an arduous task, but I manage it in stages through the week. I eagerly look forward to spending my Friday or Saturday evenings binge-writing through postcard bundles. Honestly, I think it’s the only reason I haven’t lost my mind while writing my papers. Writing these cards is an enjoyable process with the right setup. Here are some progress photos from throughout the month:
I’m happy to say that as of two days ago, I met my goal and completed my 100th postcard. (If you want to be fully technical about it, I mailed them out today.) It was fun to pick card designs for the recipients and allocate dedicated time away from the computer. I’ll probably do something like this again, but not to the extent of 100 postcards within only 1 month. It’s been a rewarding experience all around. The best part is that I’m starting a regular pen pal correspondence with 2 more friends!
Letter writing is a bit of a dying activity in this technological age. Despite that, I believe it’s a worthy pursuit. While email, texting, and social media allow for easy and quick access, there’s something extra special about maintaining a snail mail correspondence. After all, isn’t it nice to receive a surprise letter in your mailbox? The sillier the card, the better.
Song of the Day
Of course I’m going to choose a song by The Postal Service. Where were you in 2003? Because this song was EVERYWHERE.
Tonight was supposed to be MagAO-X’s third night on sky with the Magellan Clay telescope in Chile, but due to the pandemic, MagAO-X is still sitting in the lab in Tucson. It’s sad that we haven’t been to Chile since 2019, but we have been making the most of the time that we do have with MagAO-X in the lab. In fact, we have even started to use MagAO-X on a different telescope! …sort of.
Meet the Giant Magellan Telescope (GMT) simulator, otherwise known as the “High Contrast Adaptive Optics Testbed” (HCAT). This testbed sits in the room next-door to MagAO-X, and its job is to trick MagAO-X into thinking that it is actually observing at the Giant Magellan Telescope.
The purpose of HCAT is to test things for the GMT, hence the name “testbed.” Specifically, we want to see if an extreme adaptive optics instrument like MagAO-X would work with the GMT and its unique seven-mirror design.
We have been working hard over the past several months to build and align the GMT simulator with MagAO-X, and just this week we have finally achieved the first closed-loop experiment with MagAO-X! Below is a video of our first closed loop experiment:
In the video shown above, you can see the image go from a blurry mess (because of simulated turbulence) to a corrected image (thanks to the adaptive optics system). But the corrected image may look a little strange to some. This is because the GMT simulator pupil is actually only four GMT segments instead of seven. So the result is a strange, asymmetric-looking image. Below is a simulation of what the image of a star looks like for our 4-segment GMT simulator versus the actual 7-segment GMT. We use these simulated images as a reference to know what we are looking for.
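For a flavor of how such reference images are made: in the Fraunhofer approximation, the image of a point source (the PSF) is just the squared magnitude of the Fourier transform of the telescope pupil. Here is a minimal NumPy sketch with a toy segmented pupil; the circular segments and their sizes are made up for illustration and are not the real GMT (or HCAT) geometry.

```python
import numpy as np

def segmented_pupil(n=256, seg_radius=0.15, ring_radius=0.34, n_segments=7):
    """Toy GMT-like pupil: one central circular segment plus a ring of
    (n_segments - 1) outer segments. Illustrative geometry only."""
    y, x = np.mgrid[-0.5:0.5:n * 1j, -0.5:0.5:n * 1j]
    pupil = ((x**2 + y**2) < seg_radius**2).astype(float)  # central segment
    for k in range(n_segments - 1):
        theta = 2 * np.pi * k / (n_segments - 1)
        cx, cy = ring_radius * np.cos(theta), ring_radius * np.sin(theta)
        pupil += ((x - cx)**2 + (y - cy)**2) < seg_radius**2
    return np.clip(pupil, 0.0, 1.0)

def psf(pupil, pad=4):
    """Monochromatic PSF: |FT(pupil)|^2, zero-padded for finer sampling,
    shifted so the star's core lands at the center of the array."""
    n = pupil.shape[0]
    field = np.zeros((pad * n, pad * n))
    field[:n, :n] = pupil
    image = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return image / image.max()

psf7 = psf(segmented_pupil(n_segments=7))  # full 7-segment reference
psf4 = psf(segmented_pupil(n_segments=4))  # reduced-segment pupil
```

Displaying the two on a log stretch (e.g. `np.log10(psf4 + 1e-12)`) makes the extra diffraction structure from the missing segments easy to compare against the 7-segment reference.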
This was a huge step for the GMT because now we have a real GMT extreme adaptive optics simulator working in the lab. We will start to do some really cool experiments with piston sensing and AO control over the next couple of years, which will be crucial for the success of the GMT and the search for life in other solar systems with GMagAO-X.
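For readers wondering what “closing the loop” actually does: the control computer repeatedly measures the residual wavefront error with the wavefront sensor and feeds a fraction of it back onto the deformable mirror. Below is a minimal sketch of that integrator control law; the modal basis, gain, frozen aberration, and absence of sensor noise or loop delay are all simplifying assumptions, unlike the real system.

```python
import numpy as np

rng = np.random.default_rng(42)

n_modes = 50   # number of corrected wavefront modes (illustrative)
gain = 0.3     # integrator gain; real AO loops typically run around 0.1-0.5

aberration = rng.normal(size=n_modes)  # frozen-in wavefront error, for simplicity
command = np.zeros(n_modes)            # deformable-mirror command, same basis

rms_history = []
for _ in range(100):
    residual = aberration - command    # what the wavefront sensor measures
    command += gain * residual         # integrator: add a fraction of the error
    rms_history.append(np.sqrt(np.mean(residual**2)))

# With a static aberration and no noise, the residual shrinks by a factor
# of (1 - gain) every step: the blurry-to-sharp transition seen in the video.
```

In reality the turbulence evolves continuously, so the loop runs at kilohertz rates and chases a moving target; this sketch only shows why the integrator drives the error toward zero.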
Steward Observatory and Department of Astronomy tradition is to spend valuable grad student time concocting plans to amuse, vex, or embarrass the principal investigator.
Note to P.I.: This also means any embarrassing mistakes you’ve seen me make have been absolutely intentional.
We call these pranks, though I’m not sure that’s entirely accurate. In any case, we cannot hope to rival the time someone used computer administrator access to bamboozle a CNN-addicted advisor with a fake homepage. I think of them more as artistic expressions of the self, mediated through the constraints of graduate school and the cult of personality inherent in any advising relationship.
There was that one time that priceless works of art appeared to decorate the office while its occupant was abroad in Chile, and, more recently, the Merry MagAO-Xmas display. Both of these relied on having a group of graduate students with Photoshop™ skills to render 2D images that reveal the essential nature of the subject.
For the next iteration, we had to step things up. Kick it up a notch. Take things into a whole new dimension. Could we photoshop our advisor into… a movie? Haha, just kidding! Even a short clip would be many hundreds of frames. Unless…
What if there were a tool that leveraged image processing, GPU programming, and machine learning to automate this for us? We’re high-contrast imagers; we know these things. I immediately set to work on a literature review.
It just so happened that a fellow graduate student had (unknowingly) answered our prayers with “Motion-supervised Co-Part Segmentation” by Aliaksandr Siarohin et al. from ICPR 2021. Or, more importantly, the associated open-source code. Armed with a bottom-shelf NVIDIA GPU and a refurbished Dell workstation, I dug into the code. It seemed like I’d be able to get a good “face swap,” but there was one nagging problem.
What does my advisor’s face look like?
In pre-COVID times, one would have simply ambushed him with a camera and sprinted off before he realized what had happened. Confined to my home, I was forced to rely on the collective memory of the research group: in other words, this very blog.
I quickly discovered that the meek Dr. Males was camera-shy. How else does one explain his tendency to shrink into the backs of group photos? Or to grace us with only a partial mug? It’s almost as if he doesn’t even want a deep-fake model trained on his appearance! Nevertheless, I found a handful of suitable photos among the thousands, and I moved on to the next question:
Into which clip shall I face-swap my advisor?
After discounting Top Gun (for a lack of suitable pithy quote clips on YouTube), I eventually settled on this one:
“You look terrible. I want you to eat. I want you to rest well.”
Who wouldn’t want to hear that from their advisor? (Maybe we don’t want to hear the first part, but let’s not lie to ourselves.)
Source material in hand, I fired up the deepfake machine, and…
Yikes. Undaunted, I continued my analysis of the archival image data.
It turned out that Jaredification performed better when the Jared used was clean-shaven, limiting us to vintage blog photography. I found what I was looking for in this post from 2012 and gave it another go.
Ultimately, I wouldn’t say this was an unqualified success (except in that I’m “unqualified” to do deep learning on videos). There didn’t seem to be any rhyme or reason to which photos segmented well and which did not, but I was unable to acquire additional data without tipping off the subject to what I was doing.
Further investigation is needed, promising directions have been identified, funding priorities elucidated, etc. Until then, it helps if you just kind of squint at it.