Getting my Google Tensorflow Developer Certificate | by Brad Nguyen


The reason why I got one, and the resources that helped me

I recently obtained my TensorFlow Developer Certificate and was admitted to the TensorFlow Developer Network! It was a bit of a challenge: sitting through a 5-hour exam (it's been a longgg time since I did that) to solve 5 problems across regression, computer vision, NLP and time series forecasting, all to be done with deep neural networks using TF 2.0.

While it's great getting the cert done, the most valuable thing has been the learning. In this post I share some of the learning and tips from my journey.


Over the years I have accumulated a number of different project experiences (both work and personal) with deep neural networks using TensorFlow (TF) or PyTorch. I'm also quite fortunate to be part of an active ML community at ThoughtWorks Australia, discussing and learning together on the subject, and to be mentored by Mat Kelcey, one of the early adopters of TF from his days at Google Brain and an amazingly generous person when it comes to sharing his knowledge.

Despite all of this, I still feel my knowledge is rather ad hoc and fragmented. There is a bit of imposter syndrome too. I have no idea whether I'm only scratching the surface and missing some significant parts of what the TF framework offers (quite probably), or what else it is capable of.

Going for the TF certificate is a good way to get comprehensive coverage of the subject.

A good starting point is the TF Candidate Handbook. I was sold after reading the Skills checklist section. Clearly there's a lot to be learnt: it covers a full range of topics from working with traditional tabular data, computer vision, NLP and data augmentation to forecasting models, all with deep learning and TF. And some data wrangling skills, too.

Finally, I got some extra motivation from moving recently to a new role with the NAB Enterprise data science team. We have some exciting projects and many people with strong research or industry backgrounds. While my role is a mix of technical and people leadership, I'm keen to keep my technical capabilities up to date, to provide useful technical input for the team, and to help guide the team on the latest in the AI/ML tech radar.

I didn't have to re-invent the wheel much here; I just followed the advice from the community and the official handbook. The most important learning resources have been:

  • The TF course by Andrei Neagoie & Daniel Bourke on Udemy. The accompanying resources are excellent; I highly recommend practising all chapters from their GitHub repo. There are many tips on TF and deep learning in general too.
  • The official recommended course on Coursera. It is rather basic though: it's meant to be a 4-week course, but once you're more or less proficient with TF it can be done in a day (I did complete it in a day). Nice to have as a good checkpoint (and another certificate to collect for fun 🙂).
  • Good muscle memory with TF code is really useful for pushing through the exam (it's a 5-hour exam). The best way to build it is to write things from scratch on your own toy problems: just lots of model definition, model.compile() and model.fit() over and over again :-). I did some fun work building models to recognise people in my family (you can do it locally, so very low privacy risk).
  • I realised that I learned the most through practising on my own problems. I make heaps of mistakes when not following the textbook, and through these mistakes I've become much more aware of the nuances of building DNNs. For example: how much difference does rescaling the input features actually make?
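To make the muscle-memory point concrete, here is the kind of model-definition / compile / fit loop worth drilling, with a rescaling layer included. This is my own minimal sketch on made-up toy data, not anything from the exam; it assumes TF 2.6+ for `tf.keras.layers.Rescaling`:

```python
import numpy as np
import tensorflow as tf

# Toy regression data: 100 samples, 4 features on a raw 0-255 scale.
X = np.random.rand(100, 4).astype("float32") * 255.0
y = np.random.rand(100, 1).astype("float32")

# A minimal model; the Rescaling layer maps inputs into [0, 1],
# which usually makes training noticeably more stable than raw values.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255.0, input_shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

model.compile(optimizer="adam", loss="mse", metrics=["mae"])
history = model.fit(X, y, epochs=3, verbose=0)
```

Dropping the `Rescaling` layer and re-running is an easy way to see for yourself how much input scale affects the loss curve.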

Finally, a few extra tips:

  • The first one is to research tips from all the people who have taken the exam before! For example, here's a useful one.
  • It's good to have some helper functions ready, for things like check-pointing the best trained model, early stopping and TensorBoard callbacks, to name a few.
  • The exam will be done in PyCharm (I'm a VS Code person but found the transition isn't too tricky; same good old JetBrains IDE). Make sure you practise in this environment well before the exam.
  • I found it really useful to have a local setup with all required packages and dependencies in the dev environment. Don't just practise in Google Colab: it's important to ensure things can run on your machine with the same consistent results and performance (like training speed, to verify that it's all configured correctly). More specifically, I used:
  • Pyenv for the Python runtime (3.8.0 to be used) / Poetry for dependency management (but the exam will also require dependencies with specific versions to be installed directly through their PyCharm plugin, so make sure you try that as well).
  • As you develop locally, it's also handy to make sure you can launch JupyterLab within that local environment (with the same set of dependencies). It's neat when you want to try things quickly and interactively, with the benefit of an active stateful session.
  • A 5-hour exam is tricky! Make sure you have light lunch or dinner options sorted 🙂
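The helper-functions tip above (check-pointing, early stopping, TensorBoard callbacks) could look something like this. The function name and defaults are my own assumptions, just a sketch of what's handy to have pre-written:

```python
import tensorflow as tf

def make_callbacks(checkpoint_path="best_model.keras", log_dir="logs"):
    """Bundle the usual training callbacks so model.fit() stays one-liner-ish."""
    return [
        # Keep only the best model seen so far, judged by validation loss.
        tf.keras.callbacks.ModelCheckpoint(
            checkpoint_path, monitor="val_loss", save_best_only=True
        ),
        # Stop training once validation loss stops improving, and roll back
        # to the best weights rather than the last epoch's.
        tf.keras.callbacks.EarlyStopping(
            monitor="val_loss", patience=3, restore_best_weights=True
        ),
        # Log metrics for later inspection in TensorBoard.
        tf.keras.callbacks.TensorBoard(log_dir=log_dir),
    ]
```

In use, it's just `model.fit(X, y, validation_data=(X_val, y_val), callbacks=make_callbacks())`.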

Apparently Mat says JAX is next! For me, going through this reminds me to think of learning as a marathon, not a sprint. A good learning experience is great positive reinforcement to help me along the way.


