
πŸ’¬Work in Progress
  • This is a work in progress. Please check back later.
  • Usually, this is because I am translating this post to other languages.
  • Try looking for this post in other languages, if you are multilingual.

After a few years of technical writing, I felt the limitations of writing platforms that kept me from producing best-in-class articles. Technical knowledge is so dynamic and intertwined that none of the current formats – academic papers, lecture videos, code examples, or straightforward posts – can represent it well. I have examined some attempts to address this issue, namely the so-called second brains and digital gardens, but none of them seemed to solve the problem correctly. So I have distilled my frustrations into this huge mega-post and imagined what I would do if I were to create the next incarnation of the digital brain.

Update 2022/06/12

Since writing this post, I have extensively studied non-linear PKM software such as Roam, Obsidian, Logseq, and Foam. I acknowledge that I misunderstood the concept of manual linking; PKM software performs a fuzzy search to intelligently identify linked and unlinked references. I found some PKM software with automatic linking, such as Saga or Weavit, but none of them worked the way I expected. Manual linking helps refine the database. So even if I build a next-gen digital brain, I will not remove the linking process.

TL;DR​

  • Create an aesthetic, interactive, automatic pile of code, images, repos, and text that organizes, presents, and pitches itself.
  • There is no manual tagging, linking, image processing, and so on.
  • You just throw in random knowledge, and it forms a knowledge mesh network.
  • The algorithm operates everything. Knowledge is contained, processed, organized, and distributed around the world in different languages.
  • You don't tend knowledge. The algorithm penalizes outdated content (you can mark a post as evergreen to avoid this).

So what's the issue?

  • Contrary to popular belief, I noticed that the best method for managing a digital garden is not tending it. Instead, make a digital jungle – you don't take care of it; nature raises it automatically.
  • In other words, the digital brain should create as little friction as possible.
  • The less you tend, the more you write.

Especially,​

  • I despise the [[keyword]] pattern prevalent in so-called second brains (Obsidian, Dendron, ...).
  • Not to mention that it performs poorly for non-alphabetical documents,
  • it is manual – it creates a lot of friction.
  • The fact that you must explicitly wrap terms with brackets doesn't make sense... What if you realize you want to link a term you've been using across 200 posts?
  • Do you go back and link them all one by one?
  • No! The solution must lie in algorithmic keyword extraction.
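A minimal sketch of that direction (all names here are hypothetical): scan each post for the titles of other posts and emit links automatically, so nothing ever needs [[brackets]]. A real system would use fuzzy matching rather than exact titles.

```typescript
// Sketch: automatic cross-linking without [[keyword]] syntax.
// The Post shape and slug scheme are assumptions for illustration.
interface Post {
  slug: string
  title: string
  body: string
}

function escapeRegExp(s: string): string {
  // Escape regex metacharacters so titles are matched literally.
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
}

// Replace every occurrence of another post's title with a link.
function autoLink(post: Post, allPosts: Post[]): string {
  let linked = post.body
  for (const other of allPosts) {
    if (other.slug === post.slug) continue
    // Case-insensitive, whole-phrase match.
    const pattern = new RegExp(escapeRegExp(other.title), 'gi')
    linked = linked.replace(pattern, (m) => `[${m}](/${other.slug})`)
  }
  return linked
}
```

Because the pass runs over the whole corpus on every build, a term you first mention in post #200 gets linked in all 199 earlier posts for free.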

#1 Organizing Contents

Interconnected entities​

  • Practical knowledge does not exist in simple posts (though posts might look straightforward). Create a knowledge bundle that interconnects the GitHub repository, code, the GitHub README, and other posts in the same brain network.
  • Examine how Victor's post has rich metadata for the paper, dataset, demo, and post. This is what I see as interconnected entities.

Interactive Contents & Animations​

Victor Dibia. Seems to be using MDX.

μ•„λΉ λŠ” 개발자 (Dad Is a Developer). Confirmed to be using MDX.

pomb.us. Reacts to user scroll.

qubit.donghwi.dev. This isn't a blog; it's a web app that demonstrates key concepts of quantum computers. But it's still interesting.

Unorganized System. Instead, Automatic Graphing.​

  • Trust me, manually fiddling with tags sucks.

  • Having to tag posts and organize them into subdirectories resembles organizing files on your computer.

  • However, you wouldn't want to do this with thousands of posts; the boundaries also get blurry. What if a post has two properties? Which becomes the primary tag and which the secondary?

  • Students who grew up with search engines might change STEM education forever

  • A notable trend: Gen Zers don't organize folders anymore!

  • The recent trend, I would say, is dumping everything into one mega folder and searching for things whenever needed.

  • I also used to organize folders a lot more, but as search tools like Spotlight and Alfred improve, I no longer see the need to manage files by hand, considering I always pull up those search commands to open a file.

  • You don't need to manually organize all of the files when algorithms can read all the texts and organize them for you!

  • Use algorithmic inspection to analyze how posts properly interrelate with each other.
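One such algorithmic inspection could be as simple as word-overlap (Jaccard) similarity between posts; a sketch of the idea (a real system would use TF-IDF or embeddings instead):

```typescript
// Sketch: rank related posts by word overlap (Jaccard similarity).
// Tokenization here is deliberately naive: lowercase words split on
// anything that is not a Latin letter, digit, or Hangul syllable.
function wordSet(text: string): Set<string> {
  return new Set(
    text
      .toLowerCase()
      .split(/[^a-z0-9κ°€-힣]+/)
      .filter(Boolean)
  )
}

function similarity(a: string, b: string): number {
  const sa = wordSet(a)
  const sb = wordSet(b)
  let shared = 0
  for (const w of sa) if (sb.has(w)) shared++
  const union = sa.size + sb.size - shared
  return union === 0 ? 0 : shared / union
}

// Return the slugs most similar to the given text.
// The { slug, body } shape is an assumption for illustration.
function relatedPosts(
  text: string,
  others: { slug: string; body: string }[],
  topN = 3
): string[] {
  return others
    .map((p) => ({ slug: p.slug, score: similarity(text, p.body) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topN)
    .map((p) => p.slug)
}
```

This is roughly what a "relevant posts" widget like Velog's needs: no tags, no folders, just a score over the raw text.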

Velog, the Korean version of dev.to, links relevant posts for every post.

Example of backlinking from Dendron

  • I agree with the importance of interlinking knowledge crumbs, but I can't entirely agree with the method they are taking.
  • Manually linking posts is inconsistent and troublesome; it can only work at a massive communal scale, like Wikipedia.
  • You cannot apply the same logic to an individual's digital brain system.

#2 SEO and Open Graphs

Precis Bots for Meta description​

  • I can apply the above crosslinking technique to TL;DR bots for meta-tag descriptions.
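A naive version of such a precis bot, with no ML at all, would take whole sentences until the description budget is full (the ~155-character budget is an assumption based on common SEO advice):

```typescript
// Sketch: a naive "TL;DR bot" for <meta name="description">.
// A real precis bot would use a summarization model; this one just
// accumulates whole sentences until the character budget is spent.
function metaDescription(body: string, limit = 155): string {
  const sentences = body.match(/[^.!?]+[.!?]+/g) ?? [body]
  let out = ''
  for (const s of sentences) {
    const next = (out + s).trim()
    if (next.length > limit) break
    out = next
  }
  // Fallback: hard-truncate when even the first sentence is too long.
  return out || body.slice(0, limit).trim()
}
```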

Automatic Open Graph Image Insertion​

  • For example, GitHub automatically creates open graph images from repository metadata.

Example open graph image from GitHub

  • There are quite a few services using this technique.
  • GitHub wrote an awesome post on how to implement this feature.
  • I also tried to implement this on top of Ghost CMS but gave up after figuring out that the Ghost core engine would need to support it. However, I have created a fork that I can extend later on. http://og-image.cho.sh/

GitHub - anaclumos/cho-sh-og-image: Open Graph Image as a Service - generate cards for Twitter, Facebook, Slack, etc
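The pattern behind og-image-style services is simply encoding post metadata into an image URL that a renderer turns into a card. A sketch (the endpoint below is hypothetical, not the actual fork's API):

```typescript
// Sketch: build an open-graph image URL the way og-image-style
// services do: metadata goes into the path/query, and the service
// renders a card image on demand. The domain is a placeholder.
function ogImageUrl(title: string, theme: 'light' | 'dark' = 'light'): string {
  const base = 'https://og-image.example.com'
  return `${base}/${encodeURIComponent(title)}.png?theme=${theme}`
}

function ogMetaTag(title: string): string {
  return `<meta property="og:image" content="${ogImageUrl(title)}" />`
}
```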

#3 Multilanguage

Proper multilanguage support​

  • Automatic Language Detection. The baseline is to reduce the workload: I write random things, and the algorithm automatically organizes the corresponding data.
  • hreflang tags and HTTP content negotiation. I have found no service that uses this trick properly (outside of megacorporate i18n products).
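A minimal sketch of what proper hreflang output plus Accept-Language negotiation could look like (the domain and locale list are placeholders):

```typescript
// Sketch: emit hreflang alternates for a path and pick a language
// from an Accept-Language header. Supported locales are assumed.
const LOCALES = ['en', 'ko', 'ja']

function hreflangTags(path: string): string[] {
  const tags = LOCALES.map(
    (l) =>
      `<link rel="alternate" hreflang="${l}" href="https://example.com/${l}${path}" />`
  )
  // x-default tells crawlers which version to show unmatched users.
  tags.push(
    `<link rel="alternate" hreflang="x-default" href="https://example.com/en${path}" />`
  )
  return tags
}

// Tiny Accept-Language negotiation: first supported base tag wins.
// (Real negotiation would also honor q-values; this ignores them.)
function negotiate(acceptLanguage: string): string {
  for (const part of acceptLanguage.split(',')) {
    const tag = part.split(';')[0].trim().toLowerCase()
    const base = tag.split('-')[0]
    if (LOCALES.includes(base)) return base
  }
  return 'en'
}
```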

Translations​

  • At this point, I might as well write one English post and let Google Translate do the heavy lifting.
  • Also, I can get contributions from GitHub.

While supporting multiple languages and translations, I want to add a 3D WebGL globe. Remember infrastructure.aws in 2019? It showed an awesome 3D graphic of AWS's global network.

I kind of want this back. Meanwhile, this looks nice:

Also made some contributions...

Fonts and Emoji​

  • I want to go with the standard SF Pro series plus the powerful new font Pretendard.

    font-family: ui-sans-serif, -apple-system, BlinkMacSystemFont, 'Apple SD Gothic Neo', Pretendard, system-ui, sans-serif, 'Apple Color Emoji';

  • However, I am exploring other options.

  • I liked TossFace's bold attempt to infuse Korean values into the Japan-born emoji system (lol, but they canceled it).

#4 Domains and Routes

URL Structures​

  • Does URL structure matter for SEO? I don't think so, as long as an exhaustive list of URLs is provided through sitemap.xml.
  • For SEO purposes (although I still doubt the effectiveness), automatically appending a URL-ified title at the end might help (like Notion does).

Nameless routes​

  • I really don't like naming routes like cho.sh/blog/how-to-make-apple-music-clone. What if I need to update the title and want to update the URL structure?
  • Changing the URL structure affects SEO, so to preserve SEO I would need to stick to the original URL even after changing the entity title. But then the title and URL would be inconsistent.
  • Therefore, I would give each entity a UID: a hash for each interconnected entity. Maybe the randomized hash UID could be a color hex that serves as the theme color for the entity?
  • Emoji routes seem cool, aye? I would also need the Web Share API for this, since Chrome doesn't support copying Unicode URLs.
  • Some candidates I am thinking of:
  • cho.sh/β™₯/e5732f/ko
  • cho.sh/🧠/e5732f/en
  • I also found that Twitter doesn't support Unicode URLs.
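A sketch of the UID idea (the FNV-1a hash and the route shape are my assumptions): derive six stable hex digits from the entity, and let the same digits double as the route segment and the theme color.

```typescript
// Sketch: derive a stable 6-hex-digit UID from an entity seed using
// the FNV-1a hash. The choice of hash is an assumption; any stable
// hash works. The same digits can double as a theme color, #e5732f.
function entityUid(seed: string): string {
  let h = 0x811c9dc5
  for (const ch of seed) {
    h ^= ch.codePointAt(0)!
    h = Math.imul(h, 0x01000193) >>> 0
  }
  return h.toString(16).padStart(8, '0').slice(0, 6)
}

function entityRoute(seed: string, lang: string): string {
  const uid = entityUid(seed)
  return `/🧠/${uid}/${lang}` // theme color: `#${uid}`
}
```

Because the UID never changes when the title changes, the URL stays stable and the title/URL inconsistency problem disappears.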

#5 Miscellany

Headline for Outdated Posts​

  • There should be a method to penalize old posts; they should stay in the database but appear less in the data chain, i.e., give posts a lifespan or a "valid until" date.

홍민희 λΈ”λ‘œκ·Έ (Hong Minhee's blog) Β· Kat Huang
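The lifespan idea could be as simple as an exponential decay on a post's ranking weight; a sketch (the one-year half-life is an arbitrary assumption):

```typescript
// Sketch: decay a post's ranking weight as it ages, with a
// half-life of one year; evergreen posts never decay.
// The half-life value is an arbitrary assumption.
const HALF_LIFE_DAYS = 365

function freshnessWeight(ageDays: number, evergreen = false): number {
  if (evergreen) return 1
  return Math.pow(0.5, ageDays / HALF_LIFE_DAYS)
}
```

Old posts stay in the database; they just sink in ranking unless explicitly marked evergreen.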

Footnotes​

  • A nice addition. But not necessary.
  • If I ever make a footnote system, I want to make it hoverable, which namu.wiki does a great job of. I do not want it to jump down to the bottom with a cringy ↩️ icon to link back.

ToC​

  • A nice addition. But not necessary.

Comments​

  • Will go with Giscus.

This also looks cool for MD/MDX

  • Imagine there is a function bool doesItHalt({function f, input i}) that returns whether the function call f(i) halts.
  • Now consider the following functions:
pair duplicator(input i) {
  return {i, i}
}

bool invertHalt(bool b) {
  if (b) {
    while (true); // hangs forever
    return 0;
  } else {
    return 0;
  }
}
  • Essentially, if f(i) halts, invertHalt will hang (i.e., not halt), and if f(i) hangs, invertHalt will halt.
  • Let us consider the composition of the two functions:
bool unknown(input i) {
  auto a = duplicator(i)  // a = {i, i}
  auto b = doesItHalt(a)  // does i(i) halt?
  auto c = invertHalt(b)  // hangs if i(i) halts and vice versa.
}
  • Will unknown(unknown) halt? What should doesItHalt({unknown, unknown}) return?
  • Suppose it returns true. Then, inside unknown(unknown), b is true, so invertHalt(b) hangs forever, and unknown(unknown) never halts. This contradicts our supposition that doesItHalt({unknown, unknown}) returns true.
  • Suppose it returns false. Then invertHalt(b) returns immediately, so unknown(unknown) halts. This contradicts our supposition that doesItHalt({unknown, unknown}) returns false.
  • Therefore, unknown can neither hang nor halt; no such doesItHalt can exist.

Path for Karabiner Advanced Settings​

~/.config/karabiner/assets/complex_modifications/keyboard.json

keyboard.json

{
  "title": "Caps Lock β†’ Hyper Key (control+shift+option) (F16 if alone)",
  "rules": [
    {
      "description": "Caps Lock β†’ Hyper Key (control+shift+option) (F16 if alone)",
      "manipulators": [
        {
          "from": {
            "key_code": "caps_lock"
          },
          "to": [
            {
              "key_code": "left_shift",
              "modifiers": ["left_control", "left_option"]
            }
          ],
          "to_if_alone": [
            {
              "key_code": "f16"
            }
          ],
          "type": "basic"
        }
      ]
    }
  ]
}

hyper.json

{
  "title": "Hyper Key Combinations",
  "rules": [
    {
      "description": "Use Hyper + D to F13",
      "manipulators": [
        {
          "type": "basic",
          "from": {
            "key_code": "d",
            "modifiers": {
              "mandatory": ["left_shift", "left_control"]
            }
          },
          "to": [
            {
              "key_code": "f13"
            }
          ]
        }
      ]
    },
    {
      "description": "Use Hyper + E to control + up_arrow",
      "manipulators": [
        {
          "type": "basic",
          "from": {
            "key_code": "e",
            "modifiers": {
              "mandatory": ["left_shift", "left_control"]
            }
          },
          "to": [
            {
              "key_code": "up_arrow",
              "modifiers": ["left_control"]
            }
          ]
        }
      ]
    }
  ]
}

language.json

{
  "title": "Multilingual Input Methods",
  "rules": [
    {
      "description": "R Command to Gureum Han2",
      "manipulators": [
        {
          "type": "basic",
          "from": {
            "key_code": "right_command",
            "modifiers": {
              "optional": ["any"]
            }
          },
          "to": [
            {
              "key_code": "right_command",
              "lazy": true
            }
          ],
          "to_if_alone": [
            {
              "select_input_source": {
                "input_source_id": "org.youknowone.inputmethod.Gureum.han2"
              }
            }
          ]
        }
      ]
    },
    {
      "description": "L Command to Gureum Roman",
      "manipulators": [
        {
          "type": "basic",
          "from": {
            "key_code": "left_command",
            "modifiers": {
              "optional": ["any"]
            }
          },
          "to": [
            {
              "key_code": "left_command",
              "lazy": true
            }
          ],
          "to_if_alone": [
            {
              "select_input_source": {
                "input_source_id": "org.youknowone.inputmethod.Gureum.system"
              }
            }
          ]
        }
      ]
    }
  ]
}

Then I configured a bunch of shortcuts to fly through my Mac. Remember that βŒƒβŒ₯⇧ is the so-called Hyper Key I made, triggered by the Caps Lock key or the ν•œ/영 key (Korean-English key). That is because I never use the Caps Lock key (I use Shift), and I click the right command key to type Korean and the left command key to type English, inspired by the Japanese Apple keyboard's Kana (かγͺ) and Eisū (θ‹±ζ•°) keys.

Rectangle.app Keyboard Maestro.app gureum.app

P: Poly-time Solvable​

  • The class of problems solvable (and hence verifiable) in polynomial time by a deterministic Turing machine.

NP: Nondeterministic Polynomial Time​

  • The class of problems not known to be solvable in polynomial time, but verifiable in polynomial time.
  • To prove that a problem is in NP, we need an efficient certification: a certificate (a potential solution to the problem) and a certifier (a way to verify the answer in polynomial time).
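For example, SAT is in NP: the certificate is a truth assignment, and the certifier checks every clause in polynomial time. A sketch (the encoding is mine: literal 3 means x₃, literal -3 means Β¬x₃):

```typescript
// Sketch: a polynomial-time certifier for SAT.
// A clause is a list of literals; literal k means x_k, -k means Β¬x_k.
// The certificate is an assignment from variable index to boolean.
type Clause = number[]

function certify(
  clauses: Clause[],
  assignment: Map<number, boolean>
): boolean {
  // Every clause must contain at least one satisfied literal.
  return clauses.every((clause) =>
    clause.some((lit) => {
      const value = assignment.get(Math.abs(lit)) ?? false
      return lit > 0 ? value : !value
    })
  )
}
```

The certifier runs in time linear in the formula size, which is what "efficient certification" requires; actually finding the assignment is the part not known to be polynomial.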

NP-Hard: Nondeterministic Polynomial Time-Hard​

  • It means "at least as hard as the hardest problems in NP."
  • Not sure if it's solvable in poly-time.
  • Not sure if it's verifiable in poly-time.
  • To prove that a problem is NP-hard, we show that another NP-hard problem is poly-time reducible to it. That is, we reduce a known NP-hard problem to it.

NP-Complete​

Both NP and NP-Hard.

One way that doesn't work: Using environment variables​

  • If you click the app name in Xcode's top bar, you can edit the scheme.

  • You can try setting values at Run β†’ Arguments β†’ Environment Variables and accessing them through ProcessInfo.processInfo.environment["KEY"].

One possible but unsafe way: xcconfig​

  • Create an .xcconfig file and add it to the app build settings.
  • Is it safe? No!

Another possible but unsafe way: .gitignore​

  • I just made a .gitignore that ignores all *Credentials.swift files.
  • Is it safe? No!
  • However, I am using the LinkedIn API, which makes network requests.
  • Anyone willing to put in the effort to decompile the app and extract the API key could just as well attack the network request and extract the key there.
  • I concluded that any security beyond not disclosing keys through source control is meaningless for my use case.

One possible and safe way: Secure Enclaves.​

Another possible (and probably the correct) way​

  • Just don't store that level of sensitive information on the client.

Another another possible way that might be worth exploring​

Advanced Readings​

I recently saw this Gist and Interactive Page and thought it would be cool to update them for the 2020s. This can serve as a visualization of how fast a modern computer is.

How to read this calendar​

Imagine one CPU cycle took one second. In reality, Apple's M1 chip has a CPU cycle of roughly 0.25 ns – a 4,000,000,000Γ— difference. Now imagine how one real-life second would feel to the M1.

| Action | Physical Time | M1 Time |
| --- | --- | --- |
| 1 CPU cycle | 0.25 ns | 1 second |
| L1 cache reference | 1 ns | 4 seconds |
| Branch mispredict | 3 ns | 12 seconds |
| L2 cache reference | 4 ns | 16 seconds |
| Mutex lock | 17 ns | 68 seconds |
| Send 2 KB | 44 ns | 2.93 minutes |
| Main memory reference | 100 ns | 6.67 minutes |
| Compress 1 KB | 2 ΞΌs | 2.22 hours |
| Read 1 MB from memory | 3 ΞΌs | 3.33 hours |
| SSD random read | 16 ΞΌs | 17.78 hours |
| Read 1 MB from SSD | 49 ΞΌs | 2.27 days |
| Round trip in the same data center | 500 ΞΌs | 23.15 days |
| Read 1 MB from disk | 825 ΞΌs | 38.20 days |
| Disk seek | 2 ms | 92.60 days |
| Packet round trip from California to Seoul | 200 ms | 25.35 years |
| OS virtualization reboot | 5 s | 633 years |
| SCSI command timeout | 30 s | 3,802 years |
| Hardware virtualization reboot | 40 s | 5,070 years |
| Physical system reboot | 5 m | 38,026 years |
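The whole table comes from one scaling constant: every physical latency is multiplied by the factor that maps one 0.25 ns cycle to one second (4Γ—10⁹). A sketch (the unit thresholds and rounding are my own choices, so a few rows may round one digit differently than the table):

```typescript
// Sketch: reproduce the table's scaling. One 0.25 ns M1 cycle maps
// to 1 second, i.e. every physical latency is multiplied by 4e9.
const SCALE = 4_000_000_000
const MINUTE = 60
const HOUR = 3600
const DAY = 86400
const YEAR = 86400 * 365.25

function humanScale(physicalSeconds: number): string {
  const s = physicalSeconds * SCALE
  if (s < MINUTE) return `${+s.toFixed(2)} seconds`
  if (s < HOUR) return `${+(s / MINUTE).toFixed(2)} minutes`
  if (s < DAY) return `${+(s / HOUR).toFixed(2)} hours`
  if (s < YEAR) return `${+(s / DAY).toFixed(2)} days`
  return `${Math.round(s / YEAR)} years`
}
```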

When you have...​

Permissions 0644 for '~/.ssh/key.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.

Quick Fix​

  • Run this for individual keys

    sudo chmod 600 ~/.ssh/key.pem

  • Run this for the SSH key folder

    sudo chmod 700 ~/.ssh

So what are these random digits?​

  • Each digit represents the access privileges of the user, group, and others, in that order.

    7: 4(r) + 2(w) + 1(x)  rwx  read, write, and execute
    6: 4(r) + 2(w)         rw-  read and write
    5: 4(r) + 1(x)         r-x  read and execute
    4: 4(r)                r--  read only
    3: 2(w) + 1(x)         -wx  write and execute
    2: 2(w)                -w-  write only
    1: 1(x)                --x  execute only
    0: 0                   ---  none

  • Therefore, chmod 600 means giving read and write access to the user and nothing to anyone else.

  • Giving 755 means full access for the user and read and execute access for everyone else.

  • Giving 777 🎰 means giving full access to everyone.
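The digit arithmetic above can be decoded mechanically; a small sketch (the function names are mine):

```typescript
// Sketch: decode one octal permission digit into its rwx string.
// Each digit is a 3-bit mask: read = 4, write = 2, execute = 1.
function rwx(digit: number): string {
  return (
    (digit & 4 ? 'r' : '-') +
    (digit & 2 ? 'w' : '-') +
    (digit & 1 ? 'x' : '-')
  )
}

// Decode a full mode like '600' into 'rw-------'
// (user, group, others, in that order).
function modeString(mode: string): string {
  return mode
    .split('')
    .map((d) => rwx(Number(d)))
    .join('')
}
```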

Note that the Linux SSH manual says:

  • ~/.ssh/: This directory is the default location for all user-specific configuration and authentication information. There is no general requirement to keep the entire contents of this directory secret, but the recommended permissions are read/write/execute for the user and not accessible by others. (Recommends 700)
  • ~/.ssh/id_rsa: Contains the private key for authentication. These files contain sensitive data and should be readable by the user but not accessible by others (read/write/execute). ssh will simply ignore a private key file if it is accessible by others. It is possible to specify a passphrase when generating the key, which will be used to encrypt the sensitive part of this file using 3DES. (Recommends 600)

Disclaimer​

  • Both the United States and the Republic of Korea allow limited usage of copyrighted material for educational use.

Notwithstanding the provisions of sections 17 U.S.C. Β§ 106 and 17 U.S.C. Β§ 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright.

  • In most countries, downloading recorded videos for educational purposes only, for such use where you might want to watch them under an unstable internet connection, should not entail legal trouble.
  • However, you understand that you use this code block at your own risk and would not use it in such a way that will jeopardize academic integrity or infringe any intellectual rights.

Usage​

// This code is under MIT License.

let video = document.querySelector('video').src
let download = document.createElement('a')
let button = document.createElement('button')
button.innerText = 'To Download Video: Right Click Here β†’ Save Link As'
download.append(button)
download.href = video
download.setAttribute('download', video)

document.getElementsByClassName('transcript')[0].prepend(download)
  • Access the Zoom video recording page.
  • After the webpage completes loadingβ€”when you can both play the video and scroll through the chat listβ€”open the browser console.
  • Paste this code and close the console.
  • There will be a random button on top of the chat list. Don't click it; right-click it and select Save Link As.
  • Now the video will download.

The backstory of reporting this to Zoom​

In March 2021, I reported this to Zoom because I considered it a security matter. While anyone can technically record their screen to obtain a copy of the video, I thought the implications were different between one-click downloading of the full video and spending hours manually recording the video and audio.

Furthermore, instructors can decide whether to allow downloading of the original copies. The whole purpose of that setting is to add inconvenience that deters users from downloading files. In that sense, this code bypasses that policy.

That's what I told Zoom HQ. They responded:

Thank you for your report. We have reproduced the behavior you have reported. However, while this UI does not expose the download URL for recordings which have opted to disable the download functionality, a user may still record the meeting locally using a screen-recording program. In addition, for the browser to be able to play the recording, it must be transmitted to the browser in some form, which an attacker may save during transmission, and so the prevention of this is non-trivial. We appreciate your suggestion and may look into making this change in the future, but at the moment, we consider this to be a Defense-In-Depth measure. With every fix, we must carefully weigh the usability tradeoffs of any additional security control. We are reasonably satisfied with our security at this time, and we have chosen not to make any changes to our platform for the time being. We will be closing this report, but we still want to thank you for all your effort in bringing this behavior to our attention. Thank you for thinking of Zoom security.

Well... it seems they're not interested, and no patch is coming soon. So, for the time being, use this code wisely and abide by your local laws!

Prerequisite​

Final Goal​

  • Press Left Command to set Mac's input method to English.
  • Press Right Command to set Mac's input method to Korean.
  • Any other shortcut combinations would perform as usual.

Instructions​

  • Go to ~/.config/karabiner/assets/complex_modifications. You can press Command+Shift+G within Finder to open a Go to Folder window.
  • Create a JSON file like the following here (open any text editor and save it as filename.json).
  • Go to Karabiner-Elements.app β†’ Complex Modifications and press add rules.
  • Click the rules you want to enable. The file above will show up under Multilingual Input Methods.
{
  "title": "Multilingual Input Methods",
  "rules": [
    {
      "description": "R Command to ν•œκΈ€",
      "manipulators": [
        {
          "type": "basic",
          "from": {
            "key_code": "right_command",
            "modifiers": { "optional": ["any"] }
          },
          "to": [{ "key_code": "right_command", "lazy": true }],
          "to_if_alone": [{ "select_input_source": { "language": "^ko$" } }]
        }
      ]
    },
    {
      "description": "L Command to English",
      "manipulators": [
        {
          "type": "basic",
          "from": {
            "key_code": "left_command",
            "modifiers": { "optional": ["any"] }
          },
          "to": [{ "key_code": "left_command", "lazy": true }],
          "to_if_alone": [{ "select_input_source": { "language": "^en$" } }]
        }
      ]
    }
  ]
}

Configuring more languages​

Update Mar 7th, 2022​

Expensive Jobs?​

Notably:

  • Doctors πŸ§‘β€βš•οΈ
  • Lawyers πŸ§‘β€βš–οΈ

Why are expensive jobs expensive πŸ’°?​

Usually, labor becomes expensive when demand for the work is very high while supply cannot increase. Health and legal issues appear regularly throughout our society (people get injured or entangled in legal disputes), and it is doubtful that any individual can avoid needing either. In other words, these demands never vanish.

However, the supply always stagnates. Why?

  • Supply velocity remains slow, because:
    • Skillful doctors and lawyers need extended training.
    • These "prestigious" institutions can only produce dozens of professionals per year.
  • But we cannot pump out more supply. Why?
    • We lack the secondary sources for such a pipeline (i.e., people who can study for ten years are not abundant).
    • A forceful increase would entail societal backlash (i.e., unprofessional workers).
  • Such scarcely produced suppliers also work very slowly.
    • How many patients can a doctor see in a day?
      • There are complaints about factory-style medical facilities that run like conveyor belts.
    • How long does it take for a single lawsuit to resolve?

In the end, supply always falls behind demand.

Then why can AI replace expensive jobs?​

Two aspects: β‘  economic efficiency and β‘‘ performance.

Economic Efficiency​

Making AI is expensive because:

  1. High-quality AI requires a high-purity dataset.
  2. For such high-purity data, you need a lot of patternable data.

Making an AI is also tricky regardless of the field. To exaggerate slightly, creating a cleaning AI is as hard as making a medical AI.

  • To create a perfect cleaning AI...
    • To determine the pollution level of a room, you will need
      • millions of room photos matched with pollution-level data,
      • and millions of data points pairing each type of pollution with its corresponding solution.
        • Water spill. β†’ Dishcloth
        • Garbage β†’ Trash can
        • Dust β†’ Vacuum
    • Each methodology needs to be thoroughly trained.
      • Analyze and train on millions of examples of cleaning with a dishcloth
      • Analyze and train on millions of examples of containing and disposing of garbage
      • Analyze and train on millions of examples of using vacuum cleaners well

As a result, cleaning AI also costs a lot of money. In other words, if producing artificial intelligence is going to be challenging anyway, you want a model that brings sufficient economic returns and versatile adaptability. Therefore, it is appropriate to train artificial intelligence for expensive labor, which shows this high financial return on investment.

Performance​

On the other hand, AI never forgets, and it can duplicate itself. Imagine:

  • A doctor who never forgets any medical knowledge. A lawyer who remembers every case perfectly.
  • Cloning the best-in-class doctors and lawyers into thousands of AIs, taking thousands of clients at once.
  • Instantly sharing newly discovered data.
  • Remembering every detail of the client and proactively preventing accidents.
  • Meeting my family doctor whenever, wherever.

Industry Resistance​

SEOUL (Reuters) - South Korea's parliament on late Friday passed a controversial bill to limit ride-hailing service Tada, dealing a blow to a company that has been a smash hit since its launch in late 2018 but faced a backlash from taxi drivers angry over new mobility services. - South Korea passes bill limiting Softbank-backed ride-hailing service Tada | Reuters

The recent TADA warfare exhibited a classic alliance-versus-megacorporation conflict. Taxi drivers eventually won, but it was a victory without victory, since the winner was another conglomerate, Kakao Mobility, which finally took over the market.

Physicians and lawyers also show strong industry resistance. However, they also possess immense social power; one can easily imagine such scenarios:

Scenarios​

  • Medical AI kills its patient! Can we bet our lives on such a lacking machine?
    • Regardless of the context, such social fear can lead to tech Luddites.
  • Lawyer AI deemed discriminatory? Can we let such biased agents take over our nation?
    • Bias in precedents can appear depending on how statistics are captured. If someone maliciously captures statistics and frames specific vested interests as biased, it can spread into distrust and rejection of artificial intelligence, regardless of the context.

Potential Strategy​

🐡

In the animal kingdom, there was a naive monkey. One day, a badger presented colorful sneakers to the monkey. The monkey didn't need shoes but accepted them as a gift. The badger kept offering sneakers, and the calluses on the monkey's feet gradually thinned. Soon the monkey, unable to go out without shoes, became dependent on the badger.

Start with a platform system that helps doctors and lawyers.

  • DOCTORS: Start with a medical CRM. When a patient comes in, information about the patient is collected before treatment begins. During treatment, the patient's story is automatically parsed, and artificial intelligence extracts keywords. Medical personnel verify this. Similar cases and recommended care/prescriptions appear on one side of the screen. The doctor selects the appropriate option among the recommended treatments, or adds a new therapy. This information is recorded on the server and used for large-scale data training.
  • LAWYERS: Start with a case analyzer. It begins with local legal cases (e.g., traffic ticket violations), like DoNotPay - The World's First Robot Lawyer. As the database of cases gradually grows, lawyers can search for similar topics the way they would "Google Search." For example, if a fraud case comes in, the lawyer enters the details of the case, and with dozens of previous precedents, artificial intelligence analyzes the similarities and differences.
  • Like GitHub Copilot but for medical and legal cases.

Like this, provide the sneakers: essential, valuable tools for medical and legal professionals. In other words, turn doctors and lawyers into our primary customers and data pipeline. When entering a robust market like the medical and legal fields, never engage in an all-out war. Instead, build cooperative relationships first, neutralize the incumbents, and then wage the full-scale war.

πŸ“œOld Post Ahead!
  • I wrote this post more than 1 year ago.
  • That's enough time for things to change.
  • I might not agree with this post anymore.
Google Latest Articles Instead




TLDR​

  • If you see this error, double-check that your rootDir is consistent.
  • I got this error because tsc auto-flattened the folder structure.

On my TypeScript Node server, the tsc command for my production settings suddenly produced the following error.

internal/modules/cjs/loader.js:{number}
throw err;

Error: Cannot find module '{project}/dist'
at ... {
code: 'MODULE_NOT_FOUND',
requireStack: []
}

Then I stashed my work and started traveling back in time with git checkout HASH. Turns out the error started when I added MongoDB models at src/models.

It seemed strange since it had nothing to do with adding new modules or dependencies, and reinstalling node_modules did not do the job for me (relevant Stack Overflow question here). Take a look at my folder structure.

.
β”œβ”€β”€ LICENSE
β”œβ”€β”€ README.md
β”œβ”€β”€ dist
β”œβ”€β”€ package-lock.json
β”œβ”€β”€ package.json
β”œβ”€β”€ src
β”‚ β”œβ”€β”€ models (Newly added. Started to cause error.)
β”‚ β”‚ └── user.ts (Newly added. Started to cause error.)
β”‚ └── server
β”‚ β”œβ”€β”€ config
β”‚ β”‚ β”œβ”€β”€ config.ts
β”‚ β”‚ β”œβ”€β”€ dev.env
β”‚ β”‚ β”œβ”€β”€ dev.env.sample
β”‚ β”‚ β”œβ”€β”€ prod.env
β”‚ β”‚ └── prod.env.sample
β”‚ └── index.ts
└── tsconfig.json

Long story short, the problem was in my tsconfig. I had previously declared the following in my tsconfig:

{
  ...
  "include": ["src/**/*"]
}

However, since there was only the /server folder before I created the model, tsc had automatically set the root directory to src/server. Therefore the dist output looked like the following.

dist
β”œβ”€β”€ config
β”‚   β”œβ”€β”€ config.js
β”‚   └── prod.env
└── index.js

But after models/user.ts was added, src contained both the models and server directories, so TSC recognized the root directory as src. The output now became:

dist
β”œβ”€β”€ models
β”‚   └── user.js
└── server
    β”œβ”€β”€ config
    β”‚   β”œβ”€β”€ config.js
    β”‚   └── prod.env
    └── index.js

Notice the directory structure has changed. All my npm commands assumed that src/server was the root directory (that is, that the entry point was at dist/index.js), so they began to fail. Therefore I updated the npm commands. Note that I changed dist to dist/server.

rm -rf dist
&& tsc
- && cp ./src/server/config/prod.env ./dist/config/prod.env
&& export NODE_ENV=prod
- && node dist

rm -rf dist
&& tsc
+ && cp ./src/server/config/prod.env ./dist/server/config/prod.env
&& export NODE_ENV=prod
+ && node dist/server

To prevent TSC from guessing the root directory, you can add the following line to your tsconfig.json.

{
  "compilerOptions": {
    ...
    "rootDir": "src"
  }
}

This line will preserve the folder structure relative to src.
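Putting the two snippets together, a tsconfig that pins both ends of the build might look like this (a sketch; outDir is shown as dist to match the folder structure above, and any other compiler options you already have stay as they are):

```json
{
  "compilerOptions": {
    "rootDir": "src",
    "outDir": "dist"
  },
  "include": ["src/**/*"]
}
```

With rootDir fixed, adding or removing top-level folders under src can no longer silently change where tsc emits files.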


Let's create a calendar with JavaScript but without any external library. This project is based on my previous internship at Woowa Bros, a unicorn food-delivery startup in Seoul.

Show me the code first.​

GitHub - anaclumos/calendar.js: Vanilla JS Calendar

Show me the demo first.​

Goals​

  • Use functional programming* instead of Object-oriented programming.
  • No DOM manipulation after initializing. This philosophy is based on the React framework (or any other Single Page Application libraries.) DOM manipulation can be highly confusing if 30 different codes are trying to edit the same thing. So instead, we will rerender the components if we need to edit something.

πŸ’‘

Don't fix it. Buy a new one. β€” Rerendering in Front-end

Stack​

  • JavaScript Date Object
  • CSS display: grid will be useful.

Basic Idea​

  • There will be a global displayDate object that represents the displaying month.
  • navigator.js will change this displayDate object, and trigger renderCalendar() function with displayDate as an argument.
  • renderCalendar() will rerender with the calendar.

Before anything, prettier!​

Prettier helps write clean and neat code with automatic formatting.

{
  "semi": false,
  "singleQuote": true,
  "arrowParens": "always",
  "tabWidth": 2,
  "useTabs": false,
  "printWidth": 60,
  "trailingComma": "es5",
  "endOfLine": "lf",
  "bracketSpacing": true
}

.prettierrc

Now throw in some HTML.​

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta
      name="viewport"
      content="width=device-width, initial-scale=1.0"
    />
    <title>JavaScript Calendar</title>
  </head>
  <body>
    <div id="navigator"></div>
    <div id="calendar"></div>
  </body>
  <script>
    // code for rendering
  </script>
</html>

index.html

I generated this boilerplate with VS Code.

Then trick VS Code into reading JS strings as HTML tags.​

Since we use Vanilla JavaScript, we don't have access to fancy JSX-style highlighting. Instead, our generated HTML will live inside JavaScript strings, which have no syntax highlighting or IntelliSense. Therefore, let's create a function that tricks VS Code into recognizing JavaScript strings as HTML tags.

const html = (s, ...args) => s.map((ss, i) => `${ss}${args[i] || ''}`).join('');

util.js to be added – screenshot of highlighting
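To see what this helper does, here is a minimal sketch (the heading markup is made up for illustration): the tag function stitches the template's static chunks back together with the interpolated values, returning one plain string.

```javascript
// The tagged-template helper from util.js
const html = (s, ...args) =>
  s.map((ss, i) => `${ss}${args[i] || ''}`).join('')

// Interpolations are joined back into one plain string
const title = 'JavaScript Calendar'
const markup = html`<h1 class="title">${title}</h1>`
console.log(markup) // <h1 class="title">JavaScript Calendar</h1>
```

One quirk to be aware of: because of `args[i] || ''`, falsy interpolated values such as 0 or false are rendered as empty strings.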

calendar.js​

Then we connect calendar.js and index.html.

<script src="calendar.js"></script>

index.html

Defining constants will help before writing renderCalendar().

const NUMBER_OF_DAYS_IN_WEEK = 7
const NAME_OF_DAYS = [
  'sun',
  'mon',
  'tue',
  'wed',
  'thu',
  'fri',
  'sat',
]
const LONG_NAME_OF_DAYS = [
  'Sunday',
  'Monday',
  'Tuesday',
  'Wednesday',
  'Thursday',
  'Friday',
  'Saturday',
]
const ACTUAL_TODAY = new Date()

calendar.js

Note that we use NUMBER_OF_DAYS_IN_WEEK to remove magic numbers from the code. A random 7 in the middle of the code can be tough to decipher; using such a constant increases the maintainability of the code.

for (let d = 0; d < NUMBER_OF_DAYS_IN_WEEK; d++) {
  // do something
}

If there was a random 7, who knows if we are iterating through the number of Harry Potter books? This code block will be the baseline for our calendar generation. We will pass in the HTML target and a day object. today represents the month being displayed. The today object will come from navigator.js. The navigator will return the actual date for the current month, and the first day of the month for other months.

const renderCalendar = ($target, today) => {
  let html = getCalendarHTML(today)
  // minify html
  html = html.replace(/\n/g, '')
  // replace multiple spaces with single space
  html = html.replace(/\s{2,}/g, ' ')
  $target.innerHTML = html
}

calendar.js

Now, we need four different Date objects for displaying the calendar. We could've used fewer objects, but it is up to the implementation. I think reducing the number of Date objects would bring a minimal performance increase but hurt the understandability of the code, so using four objects seems like a fair middle ground.

Four Date objects we need​

  • The last day of last month: needed to highlight last month's weekend and display the correct date for last month's row.
  • The first day of this month: needed to highlight this month's weekend and figure out how many days of last month we need to render.
  • The last day of this month: needed for rendering this month with iteration.
  • The first day of next month: needed to highlight the weekend of next month.

I made a function that returns these four dates when given a specific Date.

const processDate = (day) => {
  const month = day.getMonth()
  const year = day.getFullYear()
  return {
    lastMonthLastDate: new Date(year, month, 0),
    thisMonthFirstDate: new Date(year, month, 1),
    thisMonthLastDate: new Date(year, month + 1, 0),
    nextMonthFirstDate: new Date(year, month + 1, 1),
  }
}

calendar.js
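As a quick sanity check (not part of the original post), feeding a sample date into processDate shows the four boundary dates it produces. Day 0 of a month conveniently rolls back to the last day of the previous month:

```javascript
const processDate = (day) => {
  const month = day.getMonth()
  const year = day.getFullYear()
  return {
    lastMonthLastDate: new Date(year, month, 0),
    thisMonthFirstDate: new Date(year, month, 1),
    thisMonthLastDate: new Date(year, month + 1, 0),
    nextMonthFirstDate: new Date(year, month + 1, 1),
  }
}

// For any day in January 2021:
const d = processDate(new Date(2021, 0, 15))
console.log(d.lastMonthLastDate.toDateString()) // Thu Dec 31 2020
console.log(d.thisMonthFirstDate.toDateString()) // Fri Jan 01 2021
console.log(d.thisMonthLastDate.toDateString()) // Sun Jan 31 2021
console.log(d.nextMonthFirstDate.toDateString()) // Mon Feb 01 2021
```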


Recently I came across The Noun Project's API. Combined with a download function I created in the past, you can download hundreds of icons within seconds.

Beware​

Do not use this tool to pirate others' intellectual property. Beware of what you are doing with this code and The Noun Project's API. Read the license and API documents thoroughly. Unauthorized use cases are listed here. This entire post & codes are MIT licensed.

Importing libraries​

import requests
import os
from tqdm import tqdm
from requests_oauthlib import OAuth1

You will need to pip3 install these libraries if you do not have them.

The download function​

def download(url, pathname):
    if not os.path.isdir(pathname):
        os.makedirs(pathname)
    response = requests.get(url, stream=True)
    file_size = int(response.headers.get("Content-Length", 0))
    filename = os.path.join(pathname, url.split("/")[-1])
    if filename.find("?") > 0:
        filename = filename.split("?")[0]
    progress = tqdm(
        response.iter_content(256),
        f"Downloading {filename}",
        total=file_size,
        unit="B",
        unit_scale=True,
        unit_divisor=1024,
    )
    with open(filename, "wb") as f:
        for data in progress:
            f.write(data)
            progress.update(len(data))

This code fetches the URL and saves it as a file at pathname.

The Noun Project API​

# ---

DOWNLOAD_ITERATION = 3
# Returns 50 icons per iteration.
# Three iterations equal 150 icons.

SEARCH_KEY = "tree"  # Search Term
SAVE_LOCATION = "./icons"
auth = OAuth1("API_KEY", "API_SECRET")

# ---

for iteration in range(DOWNLOAD_ITERATION):
    endpoint = (
        "http://api.thenounproject.com/icons/"
        + SEARCH_KEY
        + "?offset="
        + str(iteration * 50)
    )
    response = requests.get(endpoint, auth=auth).json()
    for icon in response["icons"]:
        download(icon["preview_url"], SAVE_LOCATION)

For more advanced uses, please visit this docs page. In addition, you can get your API Key and API secret by registering your app here.

Result​

I have run some benchmarks and found that downloading ~5k icons shouldn't be a problem. However, The Noun Project's API has a call limit so beware of that.


Primary Objectives​

  • Implement the Karatsuba Method
  • Do not use any * operator (like β€” not at all!)

First, let's import the math library.

import math

Let's add some util functions for adding zeros. The following operation is super-expensive, and I did this for the sake of removing *s.

def addZeros(number: int, zeros: int) -> int:
    s = str(number)
    for _ in range(zeros):
        s += "0"
    return int(s)

If you do not care about not using *s, you can go with:

def addZeros(number: int, zeros: int) -> int:
    return number * (10 ** zeros)

Let's say the standard input provides the value as a string, with a , between the two numbers. I wrote a wrapper function that parses the standard input and feeds the values into the core method.

def karatsuba(input: str) -> str:
    inputList = list(map(str.strip, input.split(',')))
    return str(karatsubaCore(int(inputList[0]), int(inputList[1])))

Then we need the actual calculation. For the base case (the lines after if min(v1, v2) <= 100:), you could go with v1 * v2 if you don't need to remove *s.

def karatsubaCore(v1: int, v2: int) -> int:
    if min(v1, v2) <= 100:
        minv = min(v1, v2)
        maxv = max(v1, v2)
        ans = 0
        for _ in range(minv):
            ans += maxv
        return ans

    else:
        n = int(math.log10(max(v1, v2)) // 2)
        a = int(v1 // pow(10, n))
        b = int(v1 % pow(10, n))
        c = int(v2 // pow(10, n))
        d = int(v2 % pow(10, n))

        val1 = karatsubaCore(a, c)
        val2 = karatsubaCore(b, d)
        val3 = karatsubaCore(a + b, c + d) - val1 - val2

        return addZeros(val1, n + n) + addZeros(val3, n) + val2
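For reference, the recursive branch implements the standard Karatsuba identity. Splitting the operands as v1 = a·10^n + b and v2 = c·10^n + d gives:

```latex
v_1 v_2 = \underbrace{ac}_{\texttt{val1}} \cdot 10^{2n}
        + \underbrace{\bigl((a+b)(c+d) - ac - bd\bigr)}_{\texttt{val3}} \cdot 10^{n}
        + \underbrace{bd}_{\texttt{val2}}
```

so each level performs only three recursive multiplications instead of four, and addZeros supplies the 10^{2n} and 10^{n} shifts without using *.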

It is always a good idea to have some validation. I did not use any testing library; this short script will suffice for validating the answers.

def karatCheck(input: str) -> str:
    i = list(map(str.strip, input.split(',')))

    # my calculation
    karat: int = karatsubaCore(int(i[0]), int(i[1]))

    # the correct calculation
    correct: int = int(i[0]) * int(i[1])

    print("Correct!" if karat == correct else "Itz... Wrong...")


karatCheck("342345,123943")
karatCheck("342345,0")
karatCheck("00342345 , 123943129893493")
karatCheck("12030912342345,1239431000192837812")
karatCheck("2,1239431000192837812")
karatCheck("249302570293475092384,0")
karatCheck(" 100, 100 ")

If you run this, you will get:

Correct!
Correct!
Correct!
Correct!
Correct!
Correct!
Correct!

Recently I came across the idea of publishing a React App on GitHub Pages. I can distribute my React App using GitHub, further saving server bandwidth and simplifying the API server structure. I have created a boilerplate for this structure.

Key points​

  • GitHub has a feature that automatically posts the docs folder into a small landing page.
  • Create-React-App builds its results into a build folder.
  • So if I can automatically move files from /build to /docs whenever I build the app, it would work as if I have set up a CI/CD structure.

Implementation​

"scripts": {
  "start": "react-scripts start",
  "build": "react-scripts build && rm -rf docs && mv build docs",
  "test": "react-scripts test --verbose",
  "eject": "react-scripts eject"
},

The yarn build command will replace the docs folder with a newer build of the app.

Result​


So, this blog runs on Ghost. At the footer of this website, I wanted to keep the message "Ghost ${version} self-hosted on DigitalOcean distributed by Cloudflare." But that meant every time I updated Ghost, I had to manually update that string in my theme package and re-upload it. While I automated theme deployment with GitHub Actions (you can find the post here), it was a hassle to β‘  clone the theme package, β‘‘ fix the string, and β‘’ commit and push it back. Then I thought it would be great to automatically insert the current Ghost version so that I don't have to update it manually every time. At first, I investigated the Ghost engine side to make Node.js include the value before responding to the client browser, but after a while I figured out that there was a much simpler way.

Extracting the Ghost version on client-side

Every Ghost blog includes a tag like the following for SEO and statistical reasons unless you manually disabled it.

<meta name="generator" content="Ghost 3.13">

That content attribute was what I wanted to use. Extract that value with JS.

document.getElementsByName("generator")[0].content;

Of course, if you made some other HTML tag with the name generator before this one, this wouldn't work. But you really shouldn't do that – generator tags should only be set by automatic software and aren't supposed to be edited. So either leave this tag as-is or remove it altogether.

Displaying the extracted Ghost version

The footer's HTML is generated with a handlebars file.

{{{t "{ghostlink} self-hosted on {cloudlink} distributed by {CDN}"
  ghostlink="<a href=\"https://github.com/TryGhost/Ghost\">Ghost</a>"
  cloudlink="<a href=\"https://www.digitalocean.com/\">DigitalOcean</a>"
  CDN="<a href=\"https://www.cloudflare.com/\">Cloudflare</a>"
}}}

I added an id property to ghostlink.

ghostlink="<a id = \"ghost-version\" href=\"https://github.com/TryGhost/Ghost\">Ghost</a>"

Then input the string to the corresponding tag with JS.

<script>
  document.getElementById("ghost-version").innerText =
    document.getElementsByName("generator")[0].content;
</script>

Paste this to Admin Panel β†’ Code Injections β†’ Site Footer.

You are good to go. See this in action down at the footer. ↓

One less hard-coded magic number!


The goal is to...

  1. Send notifications on Installation and Updates of a given Chrome Extension (with different content, of course)
  2. Open specific links when notifications are clicked.

Sending Notifications​

var extensionPage = 'https://chosunghyun.com/youtube-comment-language-filter'
var updateLogPage = 'https://chosunghyun.com/youtube-comment-language-filter/updates'

chrome.runtime.onInstalled.addListener(function (object) {
  if (object.reason === 'install') {
    chrome.notifications.create(extensionPage, {
      title: 'YCLF is now installed 😎',
      message: 'Click here to learn more about the extension!',
      iconUrl: './images/min-icon128.png',
      type: 'basic',
    })
  } else if (object.reason === 'update') {
    chrome.notifications.create(updateLogPage, {
      title: 'YCLF updated to v' + chrome.runtime.getManifest().version + ' πŸš€',
      message: "Click here to check out what's new!",
      iconUrl: './images/min-icon128.png',
      type: 'basic',
    })
  }
})

Also available on GitHub

  • Note that iconUrl should be the path from manifest.json to the image file, not from the background script.
  • You can use chrome.runtime.getManifest().version it to get the version of the extension.
  • If you want to send notifications from anywhere else than the background script, you must have a communication module between the notification sender and the background script to pass the notification details. Create a notification at background.js with that given detail. Sending notifications directly from content.js seems restricted. Check this post for more information.

Generally, you would need an event listener for each notification. However, there is a neat way to reduce duplicate code.

chrome.notifications.onClicked.addListener(function (notificationId) {
  chrome.tabs.create({ url: notificationId });
});

The trick is to store the link in the notificationId field and attach one event listener to all notifications. This way, a single event listener can open multiple types of links.

Additional Readings​

Note: Added June 19, 2020​

It doesn't seem that this is the ultimate answer. While the notification opens up the intended page when the user clicks the notification right after it pops up, the notification does not open up the page on click if the notification is sent to the notification center. This post will be updated if I find a better solution.


If your Ghost CMS blog's Handlebars theme shows published dates in relative time (like Published 11 months ago), you will find a handlebars snippet like this in your theme file.

<time datetime="{{date format='YYYY-MM-DD'}}">
  {{date published_at timeago='true'}}
</time>

Show exact date​

The date published_at timeago='true' helper is responsible for the relative time. Change it to this.

<time datetime="{{date format='YYYY-MM-DD'}}">
  {{date published_at format='MMMM DD, YYYY'}}
</time>

This will give something like September 7, 2000.

Show exact time​

You can use moment.js (https://momentjs.com/) syntax for fine-tuning the details.

<!-- 2000 September 07 9:00:00 PM -->
<time datetime="{{date format='YYYY-MM-DD hh:mm:ss A'}}">
  {{date published_at format='YYYY MMMM DD hh:mm:ss A'}}
</time>

<!-- 2000 09 07 9:00 PM -->
<time datetime="{{date format='YYYY-MM-DD hh:mm A'}}">
  {{date published_at format='YYYY MM DD hh:mm A'}}
</time>

<!-- 2000 09 07 21:00 -->
<time datetime="{{date format='YYYY-MM-DD HH:mm'}}">
  {{date published_at format='YYYY MM DD HH:mm'}}
</time>

For months, use MM for short notations (like 09) and MMMM for longer notations (like September). For time, use hh for 12-hour hours (or HH for 24-hour), mm for minutes, ss for seconds, and A for AM/PM. For example, I am using the following.

<time datetime="{{date format='YYYY-MM-DD h:mm A'}}">
  {{date published_at format='YYYY/MM/DD h:mm A'}}
</time>

Further readings​


import os
import math


def getFileSize(path, kilo=1024, readable=False, shortRead=False):
    # Note: assumes the path exists and its total size is nonzero
    # (math.log would fail on a size of 0).
    size = 0
    sizeArr = []
    units = ["B", "KB", "MB", "GB", "TB", "PB", "EB"]
    if os.path.isdir(path):
        for dirpath, dirnames, filenames in os.walk(path):
            for i in filenames:
                size += os.path.getsize(os.path.join(dirpath, i))
    elif os.path.isfile(path):
        size += os.path.getsize(path)
    unit = math.floor(math.log(size, kilo))
    for k in range(0, unit + 1):
        sizeArr.append(
            math.floor((size % kilo ** (k + 1)) / kilo ** k)
        )

    if readable:
        sizeString = ""
        if not shortRead:
            for x in range(unit, -1, -1):
                sizeString += str(sizeArr[x]) + units[x] + " "
            return sizeString[:-1]
        else:
            return (
                str(sizeArr[-1])
                + "."
                + str(math.floor(sizeArr[-2] / 1.024))
                + units[len(sizeArr) - 1]
            )
    else:
        return sizeArr

Examples​

Reference​

  • C:\Users\anacl\OneDrive\Documents (Folder): 3.13GB (3,366,343,239 Bytes)
  • C:\Users\anacl\OneDrive\Pictures (Folder): 83.4MB (87,468,781 Bytes)
  • C:\Users\anacl\OneDrive\Pictures\screenshot.png (File): 139KB (143,262 Bytes)

Default​

print(getFileSize("C:\\Users\\anacl\\OneDrive\\Documents"))
print(getFileSize("C:\\Users\\anacl\\OneDrive\\Pictures"))
print(
    getFileSize(
        "C:\\Users\\anacl\\OneDrive\\Pictures\\screenshot.png"
    )
)

# Expected Output
# [583, 404, 138, 3]
# [749, 426, 83]
# [926, 139]

Each element in the returned list is the value of [B, KB, MB, GB, ...] of the file size.
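In other words, the returned list holds the base-1024 "digits" of the byte count, least-significant unit first. A quick cross-check of that idea (sketched in JavaScript here, using the screenshot.png size from the reference list above):

```javascript
// Decompose a byte count into base-1024 "digits": [B, KB, MB, ...]
const toUnits = (size, kilo = 1024) => {
  const arr = []
  while (size > 0) {
    arr.push(size % kilo)
    size = Math.floor(size / kilo)
  }
  return arr
}

console.log(toUnits(143262)) // [ 926, 139 ]  →  139KB 926B
```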

Full Readable Output​

print(
    getFileSize(
        "C:\\Users\\anacl\\OneDrive\\Documents", readable=True
    )
)
print(
    getFileSize("C:\\Users\\anacl\\OneDrive\\Pictures", readable=True)
)
print(
    getFileSize(
        "C:\\Users\\anacl\\OneDrive\\Pictures\\screenshot.png",
        readable=True,
    )
)
# Expected Output
# 3GB 138MB 404KB 583B
# 83MB 426KB 749B
# 139KB 926B

Short Readable Output​

print(
    getFileSize(
        "C:\\Users\\anacl\\OneDrive\\Documents",
        readable=True,
        shortRead=True,
    )
)
print(
    getFileSize(
        "C:\\Users\\anacl\\OneDrive\\Pictures",
        readable=True,
        shortRead=True,
    )
)
print(
    getFileSize(
        "C:\\Users\\anacl\\OneDrive\\Pictures\\screenshot.png",
        readable=True,
        shortRead=True,
    )
)
# Expected Output
# 3.134GB
# 83.416MB
# 139.904KB

While Chrome and Firefox are two very different browsers, Chrome Extensions and Firefox Add-ons are now more similar than ever. Therefore, it is possible to transplant a Chrome Extension into a Firefox Add-on and publish it to the Mozilla store with minor changes. This post describes how I transplanted my YouTube Comment Language Filter to Firefox.

Checking the Chrome incompatibilities​

First of all, Firefox can run commands within the chrome namespace, such as chrome.tabs.onUpdated. However, there are still a few APIs that Firefox cannot run. Firefox offers a handy website to check Chrome incompatibilities.

  1. On your Chrome browser (or any equivalent Chromium browser), visit chrome://extensions.
  2. Enable Developer Mode and Press Pack Extension.
  3. Select your extension directory and pack your extension. That will create a .crx file.
  4. Visit the Firefox Extension Test website and upload your .crx file.
  5. If it says there is no problem, then you are fine.

If there is any problem, I advise you to visit the MDN docs and see what code caused the problem. I didn't have any problem, so I cannot share any experience here.

Adding Firefox Manifest ID​

Firefox also requires an ID inside the manifest.json file. It is like the following.

"browser_specific_settings": {
  "gecko": {
    "id": "addon@example.com",
    "strict_min_version": "42.0"
  }
},

As you can see, you can also add a strict_min_version here. See original MDN docs.

This was a minor hassle since Chrome could not recognize the above code block. So you need to keep two manifest.json files, one with the above code block (for Firefox) and one without it (for Chrome). If I find a more straightforward way, I will add it here.

Uploading it to the Firefox Add-on Store​

  1. Visit https://addons.mozilla.org/.
  2. Log in to your developer account (or create a developer account).
  3. Visit Firefox Submit a New Add-on page.
  4. Follow the guidelines on the screen.

One little tip: make sure you don't include any unnecessary files like .DS_Store. Compressing with macOS's default Finder compressor will sometimes include these files. I recommend using Keka.

Update​

  • It seems that you don't necessarily need a Firefox manifest ID. Therefore – submit the Chrome version, and 99% will work (If you didn't get any warning on the Firefox Extension Test website).

I recently found this:

This is just a thought. But it might be nice to have some sort
of easter egg message in here for the hard core Apple fans that
will stop the video.

01010011 01101111 00100000 01111001 01101111 01110101
00100000 01110100 01101111 01101111 01101011 00100000
01110100 01101000 01100101 00100000 01110100 01101001
01101101 01100101 00100000 01110100 01101111 00100000
01110100 01110010 01100001 01101110 01110011 01101100
01100001 01110100 01100101 00100000 01110100 01101000
01101001 01110011 00111111 00100000

01010111 01100101 00100000 01101100 01101111 01110110
01100101 00100000 01111001 01101111 01110101 00101110

So I made a short script.

egg = '''
01010011 01101111 00100000 01111001 01101111 01110101
00100000 01110100 01101111 01101111 01101011 00100000
01110100 01101000 01100101 00100000 01110100 01101001
01101101 01100101 00100000 01110100 01101111 00100000
01110100 01110010 01100001 01101110 01110011 01101100
01100001 01110100 01100101 00100000 01110100 01101000
01101001 01110011 00111111 00100000

01010111 01100101 00100000 01101100 01101111 01110110
01100101 00100000 01111001 01101111 01110101 00101110
'''.split()

for e in egg:
    print(chr(int(e, 2)), end="")
print()

It said...

So you took the time to translate this? We love you.
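For the record, the same decoding also works in a browser console or Node as a short JavaScript snippet (shown here with the second half of the message):

```javascript
// Decode space-separated 8-bit binary into text
const decodeBinary = (bits) =>
  bits
    .trim()
    .split(/\s+/)
    .map((b) => String.fromCharCode(parseInt(b, 2)))
    .join('')

console.log(
  decodeBinary(
    '01010111 01100101 00100000 01101100 01101111 01110110 01100101 00100000 01111001 01101111 01110101 00101110'
  )
) // We love you.
```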


The only thing I missed about my Windows computer was locking the screen, since I was so used to locking my computer with ⊞Win+L. Mac offered an alternative called the Hot Corner, but it was never as intuitive and fast as pressing ⊞Win+L. However, from macOS Mojave, you can make your Mac lock by pressing ⌘Command+L.

How do I do it?​

  1. Go to System Preferences.

  2. Go to Keyboard β†’ Shortcuts β†’ App Shortcuts and press + at the bottom.
  3. On your Menu Bar, press the ο£Ώ Apple Logo. Check the name of the menu item next to ^Control+⌘Command+Q. It is responsible for locking your Mac. Remember the name of that menu.
  4. Go back to the Preferences app. Select All Applications at Application Setting. Next, enter the Menu Title you have just checked. This title will vary according to your macOS language preference. Finally, enter ⌘Command+L at Keyboard Shortcut. (If you want to lock your Mac with a shortcut other than ⌘Command+L, you can enter it here.) Press Add when you are finished.
  5. Now the same Lock Screen option in your menu bar will show the keyboard shortcut you just set.

This method works in almost every case. Sometimes, an app will have ⌘Command+L as its own keyboard shortcut. One example is the System Preferences app, which uses ⌘Command+L for going to the Lobby of System Preferences. I have never seen any other cases where ⌘Command+L doesn't work as expected.

  • If some app uses ⌘Command+L as their default shortcuts, you can set them to some other random shortcuts, clearing the path for the Lock Screen shortcut.

  • You can now click the Touch ID button from macOS Big Sur to lock your Mac.

I recently came across a Korean font called Spoqa Han Sans. It attracted me with its simplicity and readability. However, I didn't like its English glyphs (based on Noto Sans, which I didn't love to death).

After a while, I figured out how to define a language range for each font. While importing a font face, we need to add unicode-range.

@font-face {
  font-family: 'Spoqa Han Sans';
  /* Omitted */
  unicode-range: U+AC00-D7AF;
}

U+AC00-D7AF is the Unicode range of Korean glyphs.

Can't find @font-face?​

We add either of the following statements to define a font in most cases.

<link href="//spoqa.github.io/spoqa-han-sans/css/SpoqaHanSans-kr.css" rel="stylesheet" type="text/css" />
@import url(//spoqa.github.io/spoqa-han-sans/css/SpoqaHanSans-kr.css);

This approach has the advantage that I don't need to worry about updating the font: it updates automatically whenever the font server does (which, of course, could also be a disadvantage in some cases). But we cannot define a unicode-range this way. So, rather than importing the stylesheet as above, we can copy the actual stylesheet itself. Open the URL from the @import statement with https: prepended, and you will find the font's license and its @font-face declarations.

Copy those declarations into your CSS, then add the unicode-range. Note that with this method the font will no longer update automatically, but you can always recopy and repaste when the source @font-face gets updated.


Ghost opens every external URL on the same page by default. This behavior distracts the user and increases the bounce rate. While it seems that there is no built-in option on Ghost to open external links in a new tab, we can fix this by injecting a short snippet of code.

<script>
var links = document.querySelectorAll('a');
for (var i = 0; i < links.length; i++) {
if (links[i].hostname != window.location.hostname) {
links[i].target = '_blank';
links[i].rel = 'noopener';
}
}
</script>

Paste this code at Ghost Settings → Code injection → Site Footer.

How it works​

Collect every link on the page into a list. For each link whose hostname differs from the Ghost site's hostname, change its target and rel values.

Changing target to _blank does the job of opening the link in a new tab. However, the opened page keeps a reference back to your page through window.opener, which can lead to performance drops and security risks such as tab-nabbing. Setting rel to noopener prevents this.

Modifying every link with JavaScript on each page load might slow your Ghost down, but the performance impact is negligible unless a page has many external links. This trick will do its job until Ghost provides a built-in option to open links in a new tab.



Building a payment system for school festivals

MinsaPay is a payment system built for the Minjok Summer Festival. It works like a prepaid tap-to-pay card. All of the source code, along with anonymized transaction data, is available on GitHub.


But why does a school festival need a payment system?​

My high school, Korean Minjok Leadership Academy (KMLA), had a summer festival like any other school. Students opened booths to sell food and items they created. We also screened movies produced by our students and hosted dance clubs. The water party in the afternoon is one of the festival's oldest traditions.

Because there were a lot of products being sold, it was hard to use regular paper money (a subsequent analysis by the MinsaPay team confirmed that the total volume of payments reached more than $4,000). So our student council created proprietary money called the Minjok Festival Notes. The student council had a dedicated student department act as a bank to publish the notes and monitor the currency's flow. Also, the Minjok Festival Notes acted as festival memorabilia since each year's design was unique.


The Minjok Festival Note design for 2018 had photos of the KMLA student council members at the center of the bill. The yellow one was worth approximately $5.00, the green one was worth $1.00, and the red one was worth 50 cents.

But there were problems. First, the notes were not eco-friendly: thousands were printed and disposed of every year for a single day, a waste of resources. The water party mentioned above was problematic as well. The student council printed the Minjok Festival Notes on nothing special, just ordinary paper, which made them extremely vulnerable to water, and students lost a lot of money after the water party. Eventually, KMLA students sought a way to resolve all of these issues.

Idea​

The student council first offered me the chance to develop a payment system. Because I had thought about the idea beforehand, it made a lot of sense to me, and I instantly detailed the system's feasibility and possibilities. But even after designing it in such detail that I could have jumped straight into development, I turned down the offer.

I believe in the social responsibility of developers. Developers should not be copy-pasters who merely meet technical requirements and deliver the product. On the contrary, they are people with enormous potential to open entirely new horizons by conversing with computers and other technological media. Developers have come to possess decisive power over the daily lives of the rest of us, and it is their responsibility to use that power to improve the world. That means developers should understand how impactful a single line of code can be.

Of course, I was tempted. But I had never done a project where security was the primary interest. It was a considerable risk to start with a project like this without any experience or knowledge in security. Many what-ifs flooded my brain. What if a single line of code makes the balance disappear? What if the payment record gets mixed up? What if the server is hacked? More realistically, what if the server goes down?

People praise audacity, but I prefer prudence. Bravery and arrogance are just one step apart. A financial system should be flawless (or as flawless as possible): functional, and resilient under any condition. It didn't seem impossible. But it was too naïve to believe nothing would go wrong, as I was (and still am) a total newbie in security. So I turned it down.

Wait, payment system using Google Forms?​

The student council still wanted to continue the project. I thought they would outsource the task to some outside organization. It sounded better since they would at least have some degree of security. But the council thought differently. They were making it themselves with Google Forms.

When I designed the system, the primary issue was payment authorization. The passcode must not be shared with the merchant, while the system still correctly authorizes and processes the order. Users can spend only the money deposited in their accounts, and authorization must happen in real time. But I couldn't think of a way to nail real-time authorization with Google Forms, so I asked a student council member for more technical details. The idea was as follows:

Abstract of a Google-Form-Powered Payment System​

  • Create one Google Form per user. (We have about 400 users in total.)
  • Create QR codes with links to the Google Form. (So it's 400 QR codes in total.)
  • Create a wristband with the QR code, and distribute them to the users.
  • Show that wristband when purchasing something.
  • The merchant scans the QR code and opens the link in incognito mode.
  • Input the price and the name of the booth.
  • Confirm with the user (customer) and submit the response.
  • Close the incognito tab.

So the idea was to use each Google Form's unique address as a password. Since merchants are supposed to use incognito mode, there would (in theory) be a safety layer protecting the user's Google Form address. Users would then settle a deferred payment after the festival. As a developer, though, I saw multiple problems with this approach:

Potential Problems I found​

  • How are we going to manage all 400 Google Forms?
  • Intended or not, people will lose their wristbands. In that case, we will need to note the owner of the wristband in every Google form to calculate the spending. Can we deliver those QR codes to the correct owner if we do?
  • If the merchant doesn't use incognito mode, it will be hard for an ordinary person to tell the difference. If that happens, it is possible to attack the exposed Google form by submitting fake orders. We could also add a "password," but in that case, we cannot stop the customer from providing an incorrect password and claiming that they were hacked by someone else.
  • If the merchant has to select the booth and input the price manually, there will be occasions where they make a typo. Operators could fix a typo in the price value relatively quickly, but a typo or misselection in the booth value would be a pain since we would have to find out who made a mistake and the original order. Imagine there were 20 wrong booth values. How are we going to trace the real booth value? We could guess, but would that sort of record have its value as reliable data?
  • How are we going to make the deferred payment? How will we extract and merge all 400 of the Google Forms response sheets? Even worse, the day after the festival is a vacation. People care about losing money but not so much about paying their debts. There could be students who just won't come back. It would be excruciating to notify all those who didn't deliver. But if the money is prepaid, the solution is comparably easy. The council members could deposit the remaining balance to their phone number or bank account. We don't need to message dozens of students; we could do the work ourselves.
  • The student council will make the Google Form with the student council's Google account. That Google account will have restricted access, but a few students will be working together to create all 400 Google forms. Can we track who makes the rogue action if someone manipulates the Google form for their benefit?
  • Can this all be free from human error?

It could work in an ideal situation. But it would bring a great deal of confusion and noticeable discomfort on the festival day. That made me think that even though my idea had its risks, it would still be better. So, I changed my mind.

Development​

Fortunately, I met a friend with the same intent; our vision and idea for the project aligned. I explained my previous concept, and we talked it through and co-developed the actual product. We also met at a cafe several times. I set up and managed the DNS and created the front-end side. Below are the things we thought about while making the product.

Details that my team considered​

  • We won't be able to use any payment gateway or third-party payment service since we are not officially registered, and we will use it for a single day. Some students don't own smartphones, so we won't be able to use Toss or KakaoPay (Both are well-known P2P payment services in South Korea, just like Venmo). Therefore, there cannot be any devices on the client-side. We would need to install computers on the merchant's side.
  • It is impossible to build a completely automated system. Especially in dealing with cash, we would need some help from the student council and the Department of Finances and Information. Trusted members from the committee will manually count and deposit the money.
  • There must be no errors in at least the merchant and customer fields since they would be the most difficult errors to fix later. But, of course, we cannot expect that people will make no mistakes. So, instead, we need to engineer an environment where no one can make a mistake even if they want to.
  • The booths may be congested. If each customer needs to input their username and password every time, that will pose a severe inconvenience. For user experience, some sort of one-touch payment would be ideal.
  • For this, we could use the Campus ID card. Each card has a student number (of course) and a unique value for identifying students at the school front door. We could use the number as the username and the unique value as the password. Since this password is guaranteed to be different for each student, we would only need the password for identification purposes.
  • The final payment system would be a prepaid tap-to-pay card.
  • Developers would connect each account with its owner's student ID.
  • Students could withdraw the remaining money after the festival.

We disagreed on two problems.

  1. One was the platform. While my partner insisted on using Windows executable programs, I wanted the system to be multi-platform and asked to use web apps. (As you might expect, I use a Mac.)
  2. The other was the method of reading data from the Campus ID card. The card has an RFID chip and a bar code storing the same value. If we chose RFID values, we would have to purchase ten RFID readers, spending an additional $100. Initially, I insisted on using the embedded laptop webcam to scan the barcode because MinsaPay was a pilot experiment at that time. I thought that such an expense would make the entire system questionable in terms of cost-effectiveness. (I said "Wait, we need to spend an additional $100 even though we have no idea if the system will work?")

We chose web and RFID, each of us conceding one point. I agreed to use RFID after learning that using a camera to read barcodes wasn't fast or efficient enough.


Main Home, Admin Page, and Balance Check Page of the product.

And it happened​

Remember that one of my concerns was the server going down?
On festival day, senior students had to self-study at school. At one moment, I noticed several missed calls on my phone: the server was down. I rushed to the festival, sat in a corner, and, still catching my breath, tried to find the cause. Finally, I realized the server itself was intact; the database was not responding.
It was an absurd problem. (Well, no problem is absurd, per se, but we couldn't hide our disappointment once we figured out the reason.) When we set up the database, we thought the free plan would be more than enough. However, payment requests surged past the free tier. We purchased the $9.99 plan, and the database went back to work. It was one of the most nerve-wracking moments I have ever experienced.


The moment of upgrading the database plan. $10 can cause such chaos!

While the server was down, each booth made a spreadsheet and wrote down who needed to pay how much. Afterward, we settled the problem by opening a new booth for making deferred payments.

The payment log showed that the server went down right after 10:17:55 AM and returned at 10:31:10 AM. It was evident yet intriguing that the payments made per minute were around 10 to 30 before the crash but went down to almost zero right after restoring the server. If you are interested, please look here.


Due to exceeding the database free tier, the server went down for 13 minutes and 15 seconds after payment #1546.

Results​

1. MinsaPay​

The entire codebase for MinsaPay is available on GitHub. First, though, I must mention that I still question the integrity of this system. One developer reported a security flaw that we managed to fix before launch. However, the system has unaddressed flaws; for example, though unlikely, merchants can still copy RFID values and forge ID cards.

2. Payment Data​

I wanted to give students studying data analysis more relatable and exciting data. Also, I wanted to provide financial insights for students planning to run a booth the following year. Therefore, we made all payment data accessible.

However, a data privacy problem arose. So I wrote a short script to anonymize personal data. If a CSV file is provided, it will anonymize a selected column. Identical values will have the same anonymized value. You can review the anonymized data here.

Note for Developers​

I strongly recommend thoroughly auditing the entire code or rewriting it if you use this system. MinsaPay is under the MIT license.

What I Learned​

There is ample room for improvement.

First, the code contains numerous compromises. We made many trade-offs to meet the product deadline (the festival day). We also wanted to include safety features, such as canceling payments, but ran out of time. More time and development experience would have improved the product.

Since I wasn't confident in the system's security, I initially kept the repository private and undisclosed. Later, however, I realized this was a contradiction: security through obscurity is not best practice.

Also, we were not free from human error. RFID values were long strings of digits, and a few times someone entered one into the charge amount field, producing a charge like Integer.MAX_VALUE. We could have added a simple confirmation prompt, but at the time we didn't anticipate such mistakes.

In hindsight, it was a great experience for me, as someone who had never done a large-scale real-life project. I found myself compromising even after recognizing the anti-patterns. I also learned that knowing and doing are two completely different things: knowing has no barriers, but doing comes with extreme pressure from both time and environment.

Still, it was such an exciting project.

Lastly, I want to thank everyone who made MinsaPay possible.

  • Jueon An, a talented developer who created MinsaPay with me
  • The KMLA student council and Department of Finances and Information, who oversaw the entire MinsaPay Initiatives
  • The open-source developers who reported the security flaws
  • Users who experienced server failures during the festival day
  • And the 400 users of MinsaPay

Thank you!


👋


danger

This post was written when I had little experience with Node.js. The approach described here is not advisable; the post simply serves as the departure point of my journey.

I have used AWS Elastic Beanstalk for a while and figured Heroku has several advantages over AWS. So I have migrated my AWS EB app called KMLA Forms to Heroku. For your information, KMLA Forms is a web app that simplifies writing necessary official documents in my school, KMLA.

A few advantages I found:

  1. Cost. AWS charges money once you exceed the free tier. Since some students at my school used my web app, I got real traffic, and AWS started charging me about $10/month. As far as I know, Heroku's free tier has no traffic limit.
  2. Native HTTPS support. Heroku supports HTTPS out of the box, since every dyno can use a Heroku subdomain. AWS EB, on the other hand, does not: you need to configure your own domain and an HTTPS certificate for it. That's not ideal for casual developers.

I had to make only minimal changes to app.js and package.json.

AWS Version​

// ...

http.createServer(app).listen(8081, '0.0.0.0')

console.log('Server up and running at http://0.0.0.0:8081')

Heroku Version​

// ...

const port = process.env.PORT || 8000

// ...

app.listen(port, () => {
console.log('App is running on port ' + port)
})

Also, I added "start": "node app.js" to package.json. The code is on GitHub, and the app is deployed here.
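For reference, the package.json change is tiny (other fields omitted); Heroku runs the start script to launch the dyno:

```json
{
  "scripts": {
    "start": "node app.js"
  }
}
```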