I am excited to announce that I finished the LFS211 - Linux Networking and Administration course from the Linux Foundation in preparation for my planned LFCE exam.
I just realized that I can upload a dataset and work on it (at least through the data ingestion and analysis phases) within the one hour of credits given in the Udacity workspace. So I did. And it was a lot of fun.
I participated in the #sg_spaic weekly meeting where someone presented his amazing AI robot project.
I also won third place in the draw-and-guess game. Not bad! :D
With only 10 days before the deadline, I made a pitch for my project, entitled ECG Heartbeat Categorization Prediction Project using the Azure Machine Learning Platform. My project proposal goes like this:
Healthcare is one of the sectors where deep learning has made a tremendous impact. In fact, research in the medical sector now routinely involves deep learning methods. Anyone could agree that diagnosing a disease early in the course of treatment is as important as, if not more important than, the treatment itself. Therefore, these advancements, when applied to the health sector, are truly helpful in saving people's lives, since one of the benefits of a carefully trained deep learning model is its accuracy and speed in generating results. This will benefit not only patients but doctors as well.
Out of all the diseases that demand the utmost priority from the medical sector and from researchers in the field of healthcare, heart disease is the one that needs the most careful attention. In the United States, heart disease is the leading cause of death for men, women, and people of most racial and ethnic groups. Alarmingly, one person dies every 37 seconds in the United States from heart disease, and about 647,000 Americans die from heart disease each year. Fortunately, milestones in deep learning and artificial intelligence have led several researchers to create fast and accurate models for heart disease detection.
One way to do this is to take previously available digitized electrocardiogram (ECG/EKG) data and apply deep learning methods to classify specific heart diseases [5]. With an already available dataset and a publicly available paper tackling heart disease detection, the author decided to implement a deep learning model that would attain the same level of reported accuracy, if not higher, using a state-of-the-art deep learning framework.
I hope this will be shortlisted, as I am working on it alone.
I recently activated my Azure account using my company credits to explore machine learning on the AML platform. My current interest is running PyTorch models right within the interface. It turns out that, just like with SageMaker, you can submit training jobs by provisioning GPU-enabled virtual machines.
Their tutorial is easy to follow, so I am saving it for reference.
I am interested in healthcare projects, and there are incredible datasets on Kaggle that were the basis of multiple state-of-the-art research papers and models. I am in the process of perusing datasets to find one where I can apply my knowledge of data analysis and modeling using Azure Machine Learning.
Finally, I found this gem of a dataset called ECG Heartbeat Categorization Dataset. This will be the basis for my project.
After a month of preparation, I took my first Linux Foundation examination. I prepared a clean workstation, reviewed the materials profusely, relaxed for a little bit, and prayed.
Sunday at 8:00 PM came and I logged in to the PSI online testing center. And I failed miserably due to nervousness. There were 27 questions and I only answered 20 of them due to time constraints.
Let me tell you, it is not the same as other exams I have experienced, like the CCNA or Azure certifications. You need to have the commands thoroughly memorized and demonstrate them throughout the exam. I am pretty sure I failed, but that is OK. I will prepare and try again. Incredibly, the study jams helped me focus. Now I have an idea of how to deal with it next time.
As part of the #sg_spaic study group, I participated in our weekly meeting where Sadmi talked about the Internet of Things. It was a lot of fun.
I was wondering why I had not received my email since yesterday, only to realize that I had missed a couple of questions in the module. With the help of this list, I marked them all as complete.
In other news, I successfully scheduled my exam for this coming Sunday. I hope it goes well.
I recently got my Azure account enabled, and I am excited to share that I am working on a data exploration project to continue my work here.
Still preparing for LFCS certification.
Participated in the second study jam to finish the review for my upcoming LFCS certification. It was a very productive one. Finally, I finished another run-through of LFS101x - Introduction to Linux from edX to make find, sed, awk, locate, and bash scripting stick. You know what they say: learn by repetition.
Participated in the recent #project_ideas and joined A Journey into ELK Cluster. I am using Algolia now for our prototype but, honestly, this is a good alternative in the long run. I really need to learn this, and this presentation is a good way to start.
After about a month of exploration, I finally finished the course. My plan ahead is to review the laboratories and explore projects using the Azure Machine Learning platform.
Reviewed Lesson 7: Responsible AI. I realized that this is the last lesson for this challenge course. Will do another rundown of the course tomorrow.
I finished all the labs for LFS201 after 3 days of exploration.
Today I learned about Microsoft's AI principles, model transparency and explainability, and model fairness. This is a pretty short chapter, but an important one to discuss nonetheless.
Here are my opinions about this chapter:
I am so glad that Microsoft commits to responsibility by providing this set of tools. I hope other companies follow suit as well.
Finished the Operationalizing Models and the Training and Deploying from a Compute Instance labs. Today, I officially finished lesson 6. One more pass of this chapter and I will be ready to move on to Lesson 7.
Each lab is a great combination of easy and informative. This is becoming a more palatable solution for cloud ML training than SageMaker.
Finally, check out my shiny badge from Linux Foundation:
Continued on lesson 6 and finished the lessons on basic and advanced modeling. Notably, this module introduced MLOps, which, to put it simply, is DevOps applied to machine learning. Since it combines two different sets of workflows, machine learning engineers are able to handle the complexities of both environment requirements and, of course, model requirements.
I started lesson 6 and finished the modules on how to manage compute resources for training and inference. It follows the same paradigm as SageMaker, where we have separate compute instances for training and inference. This is a good thing, since these resources can be activated only when you need them, so it falls under the pay-for-what-you-use category.
On a different note, I finished half of the laboratories for my LFCS exam this coming Saturday.
I finished lesson 5 and did another review of it before proceeding to lesson 6. I am slow, but I am getting there.
Also, I participated in the weekly #sg_spaic meeting and learned new things about PowerBI and Tableau.
I finished the anomaly detection and forecasting labs of lesson 5.
And on a different note, after 8 days of extensive training, I managed to finish Essentials of Linux System Administration. I am glad to share my achievement; here is my certificate:
I also managed to set up my workstation to enable the Debian and RHEL distributions for exploring labs in preparation for the upcoming examination.
To make sure that I won't miss the fun in the course, I returned from my Linux journey to reviewing Lesson 5. I am now on Similarity Learning, which uses regression modeling to compare streams of data. Pulling this off would have been unthinkable 10 years ago, but the power of today's hardware, plus the feats of computer science, has paved the way to extracting meaningful insights in significantly less time and with improved performance.
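The course frames similarity learning as a supervised problem, but the core idea of scoring how alike two streams of data are can be illustrated with plain cosine similarity. This is my own library-free sketch, not the course's method, and the signal windows are made-up numbers:

```python
import math

def cosine_similarity(a, b):
    """Similarity score in [-1, 1] between two equal-length signal windows."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Two windows of a noisy signal: similar shapes score close to 1.
w1 = [0.1, 0.9, 0.4, -0.2]
w2 = [0.12, 0.85, 0.38, -0.25]
score = cosine_similarity(w1, w2)
```

A learned similarity model would replace this fixed formula with a regression model trained on labeled pairs, but the input/output shape is the same: two windows in, one score out.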
I am already at 71% completion of Essentials of Linux System Administration. And here are the things that I learned:
I am targeting to finish the whole lesson today in spite of work. After that, I need to streamline my progress in Azure Machine Learning. I hope I achieve that.
Finished the final laboratory, Training a Simple Clustering Model. I also had a quick rundown of Lesson 4 and I think I am ready to proceed to lesson 5.
I also finished the study jam with considerable progress, although not on AML but in preparation for my LFCS certification due August 29.
I signed up for the study jam session to continue my lessons in supervised and unsupervised learning. And guess what? It worked. I have one laboratory left before I do a quick run-down and proceed to Lesson 5.
I decided to pause AML for a bit and focus on my Linux course. I am so glad to announce that I am currently at a 28% completion rate after just one day! Look!
In spite of a full day's work, I managed to cover the lessons on the specific package management systems of the RHEL, SUSE, and Debian distributions, not to mention their low-level counterparts (RPM and dpkg).
Also, system administration is getting real as I am starting the System Monitoring lesson and the myriad of tools that can monitor processes, memory, I/O, and the network.
Specific chapters that I learned today include:
I also documented my experience here. Check it out, won't you?
I got hands-on with the two labs in this lesson: two-class classifiers and multi-class algorithms. I was also reacquainted with regression algorithms as part of the final lessons in my day 8 review, where I was introduced to discrete and continuous supervised algorithms.
Reviewed data importing and transformation, feature engineering, monitoring data drift (with the added functionality of versioning datasets right within the platform), the difference between continuous and discrete outputs of supervised machine learning, and the evaluation metrics that depend on the classification algorithm.
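Azure Machine Learning computes classification metrics for you, but the definitions are worth keeping at hand. A minimal sketch for the binary case (the label vectors are my own toy data, not from the course):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, and recall for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
m = binary_metrics(y_true, y_pred)  # one false positive, one false negative
```

Which metric matters depends on the problem: for disease detection, recall (catching true positives) usually outweighs raw accuracy.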
Finally, I finished the rest of the labs, particularly training and evaluating a model using different evaluation metrics, simulating a classical machine learning model using a boosted decision tree for binary classification, and the exciting new feature of Azure Machine Learning: AutoML. I will definitely explore this topic more.
On a different note, after about 10 days of staggered progress, I am glad to announce that I have finished the course, with the last chapter on Local Security Principles, which includes interesting topics such as (1) Linux security, (2) root privileges and when to extend them to regular users, (3) limiting hardware access, (4) working with updates, and (5) securing the boot process and hardware resources. It was a great journey. I may apply for financial aid on this one to pursue a verified certificate.
Now this is new: in the current beta version of the AML platform, we can version datasets. Back when I was using it in my Microsoft Professional Program classes, you needed to create separate projects for separate versions of a dataset, or at least rename the files used in modeling. I am really enjoying my experience in the course. I wish I had a larger screen to take it all in without window resizing. :D
I finished the first laboratory of Lesson 3: Model Training, where I explored the Extract-Transform-Load pipeline right within the Azure Machine Learning platform. Like I said, it is the same as before. But I am interested in custom functions for ETL (either in Python or R).
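For the custom Python ETL functions I have in mind, here is a minimal, library-free sketch. The column names and cleaning steps are hypothetical examples of my own, not from the lab:

```python
import csv
import io

# Hypothetical raw export: one row has a missing heart_rate reading.
RAW = """patient_id,heart_rate,label
p01,72,normal
p02,,normal
p03,110,abnormal
"""

def extract(text):
    """Parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop rows with a missing heart_rate and cast the rest to int."""
    clean = []
    for row in rows:
        if row["heart_rate"]:
            row["heart_rate"] = int(row["heart_rate"])
            clean.append(row)
    return clean

def load(rows):
    """Stand-in for writing to a registered dataset: just return the rows."""
    return rows

result = load(transform(extract(RAW)))
```

In the real platform, extract and load would talk to datastores, but keeping transform as a plain function makes it easy to unit-test outside the cloud.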
Just yesterday, I finished the whole of lesson 2. But to absorb it fully, I decided to have another comprehensive run-down of the lesson. And it is nice to play with the labs, albeit they reset after an hour.
On a very different note, I decided to continue my foray into re-learning Linux, as I revealed on day 1. It may not be related to this challenge, but it is my milestone anyway. So far, I am really astounded at how much I have missed, especially the text editor and user environment aspects of Linux, which are lessons 11 and 12 of edX's Introduction to Linux, created by the Linux Foundation.
Chapter 11 involves the use of editors for a myriad of tasks. I am a fan of nano and gedit, but I now appreciate why other power users prefer vi and emacs for file-manipulation heavy lifting, apart from them being lightweight, console-based, and predating the text editors introduced later.
Chapter 12, on the other hand, covers accounts and user groups, how to change file ownership and permissions, how to assign users and groups to files and directories recursively, and the history | grep combo. I learned most of these in my daily work, especially during CI/CD setup tasks (deploying pertinent files to HTTP servers with wrong directory permissions used to challenge me, until I learned how to change ownership and permissions for users and groups).
I successfully finished lesson 2, which brought a bit of nostalgia for me. You see, I was accustomed to the drag-and-drop functionality of Azure Machine Learning in its early days (2017), when I was exploring the wonders of machine learning. It is nice to see they retained the same functionality in the newer environment.
Look at the newer version of Azure Machine Learning now; it is pretty neat. And with its compact integration into the Udacity workspace, albeit with small real estate and limited viewing room, it is still incredibly usable even at a negative zoom level. I dare say it is better than SageMaker. I am looking forward to further lessons.
Finished the first half of Lesson 2, covering the importance of machine learning, its historical context, the data science pipeline, data types, and the two perspectives on machine learning.
In my opinion, we started doing data science way before the buzzword was first coined, from the statistical models behind censuses to hand-drawn visualizations. It was not until the power of today's hardware, plus advancements in programming languages, found common ground between computer scientists and statisticians that data science became widely applicable, and for good reason.
As a quick side note, I am also trying to brush up my Linux skills by taking Introduction to Linux (LFS101x) from edX, in collaboration with the Linux Foundation. This is in preparation for the Essentials of Linux System Administration course that I plan to take later this month. You can read more about my experience here.
I am excited to announce that I recently finished the Intel Edge AI for IoT Developers Nanodegree from Udacity. Read my experience here.
This is also my first time setting up and writing my blog. This blog is meant for sharing the stuff I am currently exploring, be it technology, books, movies, games, online courses, or anything else that interests me, posting from time to time. Aside from that, I also think of this blog as my motivation toward completing things: a way for me to have a "written" record of what I am exploring.