Work in Progress
- This is a work in progress. Please check back later.
- Usually, this is because I am translating this post into other languages.
- If you are multilingual, try looking for this post in other languages.
After a few years of technical writing, I felt the limitations of writing platforms hindering me from writing best-in-class articles. Technical knowledge is dynamic and intertwined; none of the current formats – academic papers, lecture videos, code examples, or straightforward posts – can represent it best. I have examined some attempts to address this issue, namely the so-called second brains or digital gardens, but none of them seemed to solve the problem correctly. Therefore, I have distilled my inconveniences into this huge mega-post and imagined what I would have done had I created the next incarnation of the digital brain.
Since this post, I have extensively studied non-linear PKM software, such as Roam, Obsidian, Logseq, and Foam. I acknowledge that I misunderstood the concept of manual linking; such PKM software performs a fuzzy search to intelligently identify linked and unlinked references. I found some PKM software with automatic linking, such as Saga or Weavit, but none of them worked the way I expected. Manual linking helps refine the database. So even if I make a next-gen digital brain, I will not remove the linking process.
The `[[keyword]]` pattern is prevalent in so-called second brains (Obsidian, Dendron, ...).

- Victor Dibia. Seems to be using MDX.
- 아빠는 개발자 ("Dad is a Developer"). Confirmed to be using MDX.
- pomb.us. Reacts to user scroll.
- qubit.donghwi.dev. This isn't a blog; it's a web app that demonstrates key concepts of quantum computers. But still interesting.
Trust me, manually fiddling with tags sucks. Tagging every post and organizing posts into subdirectories resembles organizing files on your computer. However, you wouldn't want to do this if you have thousands of posts, and the boundaries get blurry: what if a post has two properties? Which becomes the primary tag, and which the secondary?
Students who grew up with search engines might change STEM education forever
A notable trend: Gen Z doesn't organize folders anymore! The recent trend, I would say, is dumping everything into one mega folder and searching for things whenever needed.
I also used to organize folders a lot more, but as search tools like Spotlight and Alfred improve, I don't see the need to manage them all by hand, considering I always pull up those search commands to open a file. You don't need to manually organize all of your files when algorithms can read all the text and organize it for you!
Use algorithmic inspection to analyze how posts properly interrelate with each other. Velog, the Korean version of dev.to, links relevant posts under every post, creating clusters of posts classified not by me but by bots and algorithms. This is similar to backlinking, which most so-called digital brains, such as Obsidian and Dendron, provide.
Example of backlinking from Dendron
Example open graph image from GitHub
While supporting multiple languages and translations, I want to add a 3D WebGL globe graphic. Remember infrastructure.aws in 2019? It used to show an awesome 3D visualization of AWS's global network. I kind of want this back too. Meanwhile, this looks nice:
Also made some contributions...
I want to go with the standard SF Pro series, paired with the powerful new font Pretendard.

font-family: ui-sans-serif, -apple-system, BlinkMacSystemFont, 'Apple SD Gothic Neo', Pretendard, system-ui, sans-serif, 'Apple Color Emoji';
However, I am exploring other options.
I liked TossFace's bold attempt to infuse Korean values into the Japan-originated emoji system. (lol, but they canceled it.) Honestly, I kind of want this back. They could use the Unicode Private Use Area, but Toss seems too lazy to do that, considering they still haven't made a WOFF webfont version.
So I might use Twemoji.
Update: I submitted a formal request to Toss to bring these Korean emojis back.

유니코드 Private Use Area를 이용해 한국적, 시대적 가치를 담은 이모지 재배포 (Redistributing emojis that carry Korean and contemporary values using the Unicode Private Use Area) · Issue #4 · toss/tossface
cho.sh/blog/how-to-make-apple-music-clone. What if I need to update the title and want to update the URL structure?

This also looks cool for MD/MDX:
Suppose there is a function `bool doesItHalt({function f, input i})` that returns whether the parameter function `f(i)` halts or not. Now define two helpers:

```
pair duplicator(input i) {
  return {i, i}
}

bool invertHalt(bool b) {
  if(b) {
    while(true); // hangs forever
    return 0;
  } else {
    return 0;
  }
}
```

If `f(i)` halts, `invertHalt` will hang (i.e., it wouldn't halt), and if `f(i)` hangs, `invertHalt` will halt.

```
bool unknown(input i) {
  auto a = duplicator(i) // a = {i, i}
  auto b = doesItHalt(a) // does i(i) halt?
  auto c = invertHalt(b) // hangs if i(i) halts and vice versa.
}
```

Does `unknown(unknown)` halt? What should `doesItHalt({unknown, unknown})` return?

- Suppose it returns `true`. For `unknown(unknown)` to halt, `invertHalt(b)` must have halted, which means `doesItHalt({unknown, unknown})` returned `false`; `invertHalt(b)` would've hung otherwise. This contradicts our supposition that `doesItHalt({unknown, unknown})` returns `true`.
- Suppose it returns `false`. For `unknown(unknown)` to hang, `invertHalt(b)` must have hung, which means `doesItHalt({unknown, unknown})` returned `true`; `invertHalt(b)` wouldn't have hung otherwise. This contradicts our supposition that `doesItHalt({unknown, unknown})` returns `false`.

`unknown(unknown)` can neither hang nor halt; therefore, no such `doesItHalt` can exist.

Save the following to `~/.config/karabiner/assets/complex_modifications/keyboard.json`:
keyboard.json
{
"title": "Caps Lock → Hyper Key (control+shift+option) (F16 if alone)",
"rules": [
{
"description": "Caps Lock → Hyper Key (control+shift+option) (F16 if alone)",
"manipulators": [
{
"from": {
"key_code": "caps_lock"
},
"to": [
{
"key_code": "left_shift",
"modifiers": ["left_control", "left_option"]
}
],
"to_if_alone": [
{
"key_code": "f16"
}
],
"type": "basic"
}
]
}
]
}
hyper.json
{
"title": "Hyper Key Combinations",
"rules": [
{
"description": "Use Hyper + D to F13",
"manipulators": [
{
"type": "basic",
"from": {
"key_code": "d",
"modifiers": {
"mandatory": ["left_shift", "left_control"]
}
},
"to": [
{
"key_code": "f13"
}
]
}
]
},
{
"description": "Use Hyper + E to control + up_arrow",
"manipulators": [
{
"type": "basic",
"from": {
"key_code": "e",
"modifiers": {
"mandatory": ["left_shift", "left_control"]
}
},
"to": [
{
"key_code": "up_arrow",
"modifiers": ["left_control"]
}
]
}
]
}
]
}
keyboard.json
{
"title": "Multilingual Input Methods",
"rules": [
{
"description": "R Command to Gureum Han2",
"manipulators": [
{
"type": "basic",
"from": {
"key_code": "right_command",
"modifiers": {
"optional": ["any"]
}
},
"to": [
{
"key_code": "right_command",
"lazy": true
}
],
"to_if_alone": [
{
"select_input_source": {
"input_source_id": "org.youknowone.inputmethod.Gureum.han2"
}
}
]
}
]
},
{
"description": "L Command to Gureum Roman",
"manipulators": [
{
"type": "basic",
"from": {
"key_code": "left_command",
"modifiers": {
"optional": ["any"]
}
},
"to": [
{
"key_code": "left_command",
"lazy": true
}
],
"to_if_alone": [
{
"select_input_source": {
"input_source_id": "org.youknowone.inputmethod.Gureum.system"
}
}
]
}
]
}
]
}
language.json
Then I configured a bunch of shortcuts to fly through my Mac. Remember that ⌃⌥⇧ is the so-called Hyper Key I made, which uses the Caps Lock key or the 한/영 key (Korean-English key). That is because I never use the Caps Lock key (I use Shift), and I press the right Command key to type Korean and the left Command key to type English, inspired by the Japanese Apple keyboard's Kana (かな) and Eisu (英数) keys.
Rectangle.app
Keyboard Maestro.app
gureum.app
Both NP and NP-Hard.
- Edit Scheme: set environment variables in Xcode's Edit Scheme and read them with `ProcessInfo.processInfo.environment["KEY"]`. However, this didn't work for me. Refer to this problem on Stack Overflow: "ProcessInfo.processInfo.environment variables work in Simulator but not on Device."
- `xcconfig`: store the values in an `.xcconfig` file and add them to the app build settings.
- `.gitignore`: add a `.gitignore` that ignores all `*Credentials.swift` files.
- Keychain Manager: however, Keychain items are meant for storing personal sensitive data like usernames and passwords. I am unsure if I can store data in Keychain without exposing it to the end user or the application (`.ipa`) file.
I recently saw this Gist and interactive page, so I thought it would be cool to update it for the 2020s. It can serve as a visualization of how fast a modern computer is. Imagine 1 CPU cycle took 1 second. Apple's M1 chip has a CPU cycle of approximately 0.25 ns, a 4,000,000,000× difference. Now, imagine how one second in real life would feel to the M1.
| Action | Physical Time | M1 Time |
| --- | --- | --- |
| 1 CPU cycle | 0.25 ns | 1 second |
| L1 cache reference | 1 ns | 4 seconds |
| Branch mispredict | 3 ns | 12 seconds |
| L2 cache reference | 4 ns | 16 seconds |
| Mutex lock | 17 ns | 68 seconds |
| Send 2 KB | 44 ns | 2.93 minutes |
| Main memory reference | 100 ns | 6.67 minutes |
| Compress 1 KB | 2 μs | 2.22 hours |
| Read 1 MB from memory | 3 μs | 3.33 hours |
| SSD random read | 16 μs | 17.78 hours |
| Read 1 MB from SSD | 49 μs | 2.27 days |
| Round trip in the same data center | 500 μs | 23.15 days |
| Read 1 MB from the disk | 825 μs | 38.20 days |
| Disk seek | 2 ms | 92.60 days |
| Packet roundtrip from California to Seoul | 200 ms | 25.35 years |
| OS virtualization reboot | 5 s | 633 years |
| SCSI command timeout | 30 s | 3,802 years |
| Hardware virtualization reboot | 40 s | 5,070 years |
| Physical system reboot | 5 m | 38,026 years |
```
Permissions 0644 for '~/.ssh/key.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
```

Run this for individual keys:

```
sudo chmod 600 ~/.ssh/key.pem
```

Run this for the SSH key folder:

```
sudo chmod 700 ~/.ssh
```
Each digit represents the access privileges of User, Group, and Other.

| Digit | Sum | Symbol | Meaning |
| --- | --- | --- | --- |
| 7 | 4(r) + 2(w) + 1(x) | rwx | read, write, and execute |
| 6 | 4(r) + 2(w) | rw- | read and write |
| 5 | 4(r) + 1(x) | r-x | read and execute |
| 4 | 4(r) | r-- | read only |
| 3 | 2(w) + 1(x) | -wx | write and execute |
| 2 | 2(w) | -w- | write only |
| 1 | 1(x) | --x | execute only |
| 0 | 0 | --- | none |
Therefore, chmod 600 means giving read and write access to the user and nothing to any other parties. Giving 755 means giving full access to the user, and read and execute access to any other parties. Giving 777 🎰 means giving full access to everyone.
Note that Linux SSH manual says:
`~/.ssh/`: This directory is the default location for all user-specific configuration and authentication information. There is no general requirement to keep the entire contents of this directory secret, but the recommended permissions are read/write/execute for the user, and not accessible by others. (Recommends 700.)

`~/.ssh/id_rsa`: Contains the private key for authentication. These files contain sensitive data and should be readable by the user but not accessible by others (read/write/execute). ssh will simply ignore a private key file if it is accessible by others. It is possible to specify a passphrase when generating the key, which will be used to encrypt the sensitive part of this file using 3DES. (Recommends 600.)

Notwithstanding the provisions of sections 17 U.S.C. § 106 and 17 U.S.C. § 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright.
// This code is under MIT License.
let video = document.querySelector('video').src
let download = document.createElement('a')
let button = document.createElement('button')
button.innerText = 'To Download Video: Right Click Here → Save Link As'
download.append(button)
download.href = video
download.setAttribute('download', video)
document.getElementsByClassName('transcript')[0].prepend(download)
In March 2021, I reported this to Zoom, as I considered it a security matter. While anyone can technically record their screen to obtain a copy of the video, I thought the implications were different between being able to one-click download the full video and having to spend hours manually recording the video and audio. Furthermore, instructors can decide whether to allow downloading the original copies. This feature's whole purpose, then, is to add inconvenience that deters users from downloading files. In that sense, this code is a bypass of that security policy.
That's what I told Zoom HQ. They responded:
Thank you for your report. We have reproduced the behavior you have reported. However, while this UI does not expose the download URL for recordings which have opted to disable the download functionality, a user may still record the meeting locally using a screen-recording program. In addition, for the browser to be able to play the recording, it must be transmitted to the browser in some form, which an attacker may save during transmission, and so the prevention of this is non-trivial. We appreciate your suggestion and may look into making this change in the future, but at the moment, we consider this to be a Defense-In-Depth measure. With every fix, we must carefully weigh the usability tradeoffs of any additional security control. We are reasonably satisfied with our security at this time, and we have chosen not to make any changes to our platform for the time being. We will be closing this report, but we still want to thank you for all your effort in bringing this behavior to our attention. Thank you for thinking of Zoom security.
Well... It seems like they're not interested, and no patch will come soon. So, for the time being, use this code wisely, and abide by your laws!
The goal:

- `Left Command`, when pressed alone, sets the Mac's input method to English.
- `Right Command`, when pressed alone, sets the Mac's input method to Korean.

The steps:

- Save the JSON below to `~/.config/karabiner/assets/complex_modifications`. (You can press `Command+Shift+G` within Finder to open a Go to Folder window.) Any file name works (`filename.json`).
- Open `Karabiner-Elements.app` → Complex Modifications, press Add Rules, and enable `Multilingual Input Methods`.
.{
"title": "Multilingual Input Methods",
"rules": [
{
"description": "R Command to 한글",
"manipulators": [
{
"type": "basic",
"from": {
"key_code": "right_command",
"modifiers": { "optional": ["any"] }
},
"to": [{ "key_code": "right_command", "lazy": true }],
"to_if_alone": [{ "select_input_source": { "language": "^ko$" } }]
}
]
},
{
"description": "L Command to English",
"manipulators": [
{
"type": "basic",
"from": {
"key_code": "left_command",
"modifiers": { "optional": ["any"] }
},
"to": [{ "key_code": "left_command", "lazy": true }],
"to_if_alone": [{ "select_input_source": { "language": "^en$" } }]
}
]
}
]
}
Notably:
Usually, labor becomes expensive when demand for the work is very high while the supply cannot increase. Moreover, health and legal issues appear regularly throughout our society (people get injured or end up in legal disputes), and it is doubtful that an individual could avoid both for a lifetime. In other words, these demands never vanish.
However, the supply always stagnates. Why?
In the end, supply always falls behind demand.
Two aspects ① Economic Efficiency ② Performance.
Making AI is expensive because:
Making an AI is also tricky regardless of the field. To slightly exaggerate, creating a cleaning AI is as hard as making a medical AI.
As a result, a cleaning artificial intelligence also costs a lot of money. In other words, if producing artificial intelligence is challenging anyway, you want a model that brings sufficient economic effect and versatile adaptability. Therefore, it is appropriate to train artificial intelligence for expensive labor, where it can show a high financial return on investment.
On the other hand, AI never forgets, and it can duplicate itself. Imagine:
SEOUL (Reuters) - South Korea's parliament on late Friday passed a controversial bill to limit ride-hailing service Tada, dealing a blow to a company that has been a smash hit since its launch in late 2018 but faced a backlash from taxi drivers angry over new mobility services. - South Korea passes bill limiting Softbank-backed ride-hailing service Tada | Reuters
The recent TADA warfare exhibited a classic alliance-versus-megacorporation style of conflict. Taxi drivers eventually won, but it was a victory without victory, since the winner was another conglomerate, Kakao Mobility, which finally took over the market.
Physicians and lawyers also show strong industry resistance. However, they also possess immense social power; one can easily imagine such scenarios:
In the animal kingdom, there was a naive monkey. One day, a badger came and presented colorful sneakers to a monkey. The monkey didn't need shoes but received them as a gift. After that, badgers continued offering sneakers, and the callus on the monkey's feet gradually thinned. Soon, the monkey, unable to go out without shoes, became dependent on the badger.
Start with a platform system that helps doctors and lawyers.
Like the badger, provide sneakers: essential and valuable tools for medical personnel and legal professionals. In other words, transform doctors and lawyers into our primary customers and data pipeline. When entering a robust market like the medical and legal circles, never engage in an all-out war. Instead, build cooperative relationships first, neutralize them, and then wage a full-scale war.
Make sure `rootDir` is consistent.

On my TypeScript Node server, I suddenly got the following error on the `tsc` command for production settings.
```
internal/modules/cjs/loader.js:{number}
throw err;
Error: Cannot find module '{project}/dist'
at ... {
  code: 'MODULE_NOT_FOUND',
  requireStack: []
}
```
Then I stashed my work and started traveling back in time with `git checkout HASH`. Turns out, the error started when I added MongoDB models at `src/models`.
It seemed strange since it had nothing to do with adding new modules or dependencies. Reinstalling `node_modules` did not do the job for me (relevant Stack Overflow question here). Take a look at my folder structure.
.
├── LICENSE
├── README.md
├── dist
├── package-lock.json
├── package.json
├── src
│ ├── models (Newly added. Started to cause error.)
│ │ └── user.ts (Newly added. Started to cause error.)
│ └── server
│ ├── config
│ │ ├── config.ts
│ │ ├── dev.env
│ │ ├── dev.env.sample
│ │ ├── prod.env
│ │ └── prod.env.sample
│ └── index.ts
└── tsconfig.json
Long story short, it was a problem with my `tsconfig`. I had previously declared the following in my `tsconfig`.
{
...
"include": ["src/**/*"]
}
However, since there was only the `/server` folder before creating the model, it seems that tsc automatically set the root directory to `src/server`. Therefore, the `dist` output looked like the following.
dist
├── config
│ ├── config.js
│ └── prod.env
└── index.js
But after `models/user.ts` was added, `src` contained both the `models` and `server` directories, so tsc recognized the root directory as `src`. The output now became:
dist
├── models
│ └── user.js
└── server
├── config
│ ├── config.js
│ └── prod.env
└── index.js
Notice the directory structure has changed. My npm scripts were written as if `src/server` were the root directory (as if the index were at `dist/index.js`), so that began to cause the error. Therefore, I updated the npm scripts. Note that I changed `dist` to `dist/server`.
```diff
  rm -rf dist
  && tsc
- && cp ./src/server/config/prod.env ./dist/config/prod.env
+ && cp ./src/server/config/prod.env ./dist/server/config/prod.env
  && export NODE_ENV=prod
- && node dist
+ && node dist/server
```
To prevent tsc from guessing the root directory, you can add the following line to your `tsconfig.json`.
{
"compilerOptions": {
...
"rootDir": "src",
}
}
This line will retain the absolute folder structure from `src`.
Let's create a calendar with JavaScript but without any external library. This project is based on my previous internship at Woowa Bros, a unicorn food-delivery startup in Seoul.
GitHub - anaclumos/calendar.js: Vanilla JS Calendar
💡
Don't fix it. Buy a new one. — Rerendering in Front-end
- Use the `Date` object.
- `display: grid` will be useful.
- Keep a `displayDate` object that represents the displayed month.
- `navigator.js` will change this `displayDate` object and trigger the `renderCalendar()` function with `displayDate` as an argument.
- `renderCalendar()` will rerender the calendar.
- Use `prettier`!

Prettier helps write clean and neat code with automatic formatting.
{
"semi": false,
"singleQuote": true,
"arrowParens": "always",
"tabWidth": 2,
"useTabs": false,
"printWidth": 60,
"trailingComma": "es5",
"endOfLine": "lf",
"bracketSpacing": true
}
.prettierrc
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0"
/>
<title>JavaScript Calendar</title>
</head>
<body>
<div id="navigator"></div>
<div id="calendar"></div>
</body>
<script>
// code for rendering
</script>
</html>
index.html
I generated this boilerplate with VS Code.
Since we use Vanilla JavaScript, we don't have access to fancy JSX-style highlighting. Instead, our generated HTML code will live inside JavaScript strings, which don't get syntax highlighting or IntelliSense. Therefore, let's create a function that tricks VS Code into recognizing JavaScript strings as HTML tags.
const html = (s, ...args) => s.map((ss, i) => `${ss}${args[i] || ''}`).join('');
util.js
to be added – screenshot of highlighting
calendar.js
Then we connect calendar.js
and index.html
.
<script src="calendar.js"></script>
index.html
Defining constants will help before writing renderCalendar()
.
const NUMBER_OF_DAYS_IN_WEEK = 7
const NAME_OF_DAYS = [
'sun',
'mon',
'tue',
'wed',
'thu',
'fri',
'sat',
]
const LONG_NAME_OF_DAYS = [
'Sunday',
'Monday',
'Tuesday',
'Wednesday',
'Thursday',
'Friday',
'Saturday',
]
const ACTUAL_TODAY = new Date()
calendar.js
Note that we use `NUMBER_OF_DAYS_IN_WEEK` to remove magic numbers from our code. A random `7` in the middle of the code can be tough to decipher. Using such constants instead increases the maintainability of the code.
for (let d = 0; d < NUMBER_OF_DAYS_IN_WEEK; d++) {
// do something
}
If there were a random `7`, who knows whether we are iterating over the number of Harry Potter books?
This code block will be the baseline for our calendar generation. We will pass in the HTML target and a day object. `today` represents the month being displayed. The `today` object will come from `navigator.js`. The navigator will return the actual date for the current month, and the first day of the month for other months.
const renderCalendar = ($target, today) => {
let html = getCalendarHTML(today)
// minify html
html = html.replace(/\n/g, '')
// replace multiple spaces with single space
html = html.replace(/\s{2,}/g, ' ')
$target.innerHTML = html
}
calendar.js
Now, we need four different Date objects for displaying the calendar. We could've used fewer objects, but that is up to the implementation. I think reducing the number of date objects here would yield a minimal performance increase while hurting the understandability of the code, so using four objects seems like a fair middle ground. I made a function that produces these four dates when given a specific Date.
const processDate = (day) => {
const month = day.getMonth()
const year = day.getFullYear()
return {
lastMonthLastDate: new Date(year, month, 0),
thisMonthFirstDate: new Date(year, month, 1),
thisMonthLastDate: new Date(year, month + 1, 0),
nextMonthFirstDate: new Date(year, month + 1, 1),
}
}
calendar.js
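This relies on JavaScript's day-zero convention: `new Date(year, month, 0)` is the last day of the previous month, which is how `lastMonthLastDate` and `thisMonthLastDate` are computed above. A quick check with my own example dates:

```javascript
// Month index 2 is March, so day 0 rolls back to the last day of February.
const leapFeb = new Date(2024, 2, 0)
console.log(leapFeb.getDate()) // 29 (2024 is a leap year)

// Day 0 of January wraps to December 31 of the previous year.
const prevYearEnd = new Date(2024, 0, 0)
console.log(prevYearEnd.getFullYear(), prevYearEnd.getDate()) // 2023 31
```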
Recently I came across The Noun Project's API. With the combination of the download
function I created in the past, you could download hundreds of icons within seconds.
Do not use this tool to pirate others' intellectual property. Beware of what you are doing with this code and The Noun Project's API. Read the license and API documents thoroughly. Unauthorized use cases are listed here. This entire post & codes are MIT licensed.
import requests
import os
from tqdm import tqdm
from requests_oauthlib import OAuth1
You will need to `pip3 install` these libraries if you do not have them.
The `download` function:

```
def download(url, pathname):
    if not os.path.isdir(pathname):
        os.makedirs(pathname)
    response = requests.get(url, stream=True)
    file_size = int(response.headers.get("Content-Length", 0))
    filename = os.path.join(pathname, url.split("/")[-1])
    if filename.find("?") > 0:
        filename = filename.split("?")[0]
    progress = tqdm(
        response.iter_content(256),
        f"Downloading {filename}",
        total=file_size,
        unit="B",
        unit_scale=True,
        unit_divisor=1024,
    )
    with open(filename, "wb") as f:
        for data in progress:
            f.write(data)
            progress.update(len(data))
```
This code fetches the URL and saves it as a file at pathname
.
# ---
DOWNLOAD_ITERATION = 3
# Returns 50 icons per iteration.
# Three iteration equals 150 icons.
SEARCH_KEY = "tree" # Search Term
SAVE_LOCATION = "./icons"
auth = OAuth1("API_KEY", "API_SECRET")
# ---
for iteration in range(DOWNLOAD_ITERATION):
endpoint = (
"http://api.thenounproject.com/icons/"
+ SEARCH_KEY
+ "?offset="
+ str(iteration * 50)
)
response = requests.get(endpoint, auth=auth).json()
for icon in response["icons"]:
download(icon["preview_url"], SAVE_LOCATION)
For more advanced uses, please visit this docs page. In addition, you can get your API Key and API secret by registering your app here.
I have run some benchmarks and found that downloading ~5k icons shouldn't be a problem. However, The Noun Project's API has a call limit, so beware of that.
Multiplication without the `*` operator (like, not at all!)

First, let's import the `math` library.

```
import math
```
Let's add a util function for appending zeros. The following operation is super expensive; I did this purely for the sake of removing `*`s.
def addZeros(number: int, zeros: int) -> int:
s = str(number)
for _ in range(zeros):
s += "0"
return int(s)
If you do not care about not using *
s, you can go with:
def addZeros(number: int, zeros: int) -> int:
return number * (10 ** zeros)
Let's say the standard input provides the values as a string, with a `,` between the two numbers. I wrote a wrapper function that parses the standard input and feeds the values into the core method.
def karatsuba(input: str) -> str:
inputList = list(map(str.strip, input.split(',')))
return str(karatsubaCore(int(inputList[0]), int(inputList[1])))
Then we need to finish the actual calculation. For the base case (the lines after `if min(v1, v2) <= 100:`), you could go with `v1 * v2` if you don't need to remove `*`s.
def karatsubaCore(v1: int, v2: int) -> int:
if min(v1, v2) <= 100:
minv = min(v1, v2)
maxv = max(v1, v2)
ans = 0
for _ in range(minv):
ans += maxv
return ans
else:
n = int(math.log10(max(v1, v2))//2)
a = int(v1 // pow(10, n))
b = int(v1 % pow(10, n))
c = int(v2 // pow(10, n))
d = int(v2 % pow(10, n))
val1 = karatsubaCore(a, c)
val2 = karatsubaCore(b, d)
val3 = karatsubaCore(a+b, c+d) - val1 - val2
return addZeros(val1, n+n) + addZeros(val3, n) + val2
It is always a good idea to have some validation. I did not use any testing library; this short script will suffice for validating the answer.
def karatCheck(input: str) -> str:
i = list(map(str.strip, input.split(',')))
# my calculation
karat: int = karatsubaCore(int(i[0]), int(i[1]))
# the correct calculation
correct: int = int(i[0]) * int(i[1])
print("Correct!" if karat == correct else "Itz... Wrong...")
karatCheck("342345,123943")
karatCheck("342345,0")
karatCheck("00342345 , 123943129893493")
karatCheck("12030912342345,1239431000192837812")
karatCheck("2,1239431000192837812")
karatCheck("249302570293475092384,0")
karatCheck(" 100, 100 ")
If you run this, you will get:

```
Correct!
Correct!
Correct!
Correct!
Correct!
Correct!
Correct!
```
Recently I came across the idea of publishing a React App on GitHub Pages. I can distribute my React App using GitHub, further saving server bandwidth and simplifying the API server structure. I have created a boilerplate for this structure.
- GitHub Pages can turn the `docs` folder into a small landing page.
- React builds the app into the `build` folder.
- If I move `/build` to `/docs` whenever I build the app, it works as if I have set up a CI/CD structure.

"scripts": {
"start": "react-scripts start",
"build": "react-scripts build && rm -rf docs && mv build docs",
"test": "react-scripts test --verbose",
"eject": "react-scripts eject"
},
The yarn build
command will replace the docs folder with a newer build of the app.
So, this blog runs on Ghost. At the footer of this website, I wanted to keep the message "Ghost ${version} self-hosted on DigitalOcean distributed by Cloudflare." But that meant every time I updated Ghost, I had to manually update that string in my theme package and re-upload it. While I automated theme deployment with GitHub Actions (you can find the post here), it was a hassle to ① clone the theme package, ② fix the string, and ③ commit and push it back. Then I thought it would be great to automatically insert the current Ghost version so that I wouldn't have to update it manually every time. At first, I investigated the Ghost engine side to make Node.js inject the value before responding to the client browser, but after a while, I figured out a much simpler way.
Every Ghost blog includes a tag like the following for SEO and statistical reasons unless you manually disabled it.
<meta name="generator" content="Ghost 3.13">
That `content` value was what I wanted to use. Extract it with JS.
document.getElementsByName("generator")[0].content;
Of course, if you made some other HTML tag named `generator` before this one, this wouldn't work. But you really shouldn't do that – `generator` tags should only be set by automatic software and aren't supposed to be edited. So either leave this tag as-is or remove it altogether.
The footer's HTML is generated with a handlebars file.
```
{{{t "{ghostlink} self-hosted on {cloudlink} distributed by {CDN}"
  ghostlink="<a href=\"https://github.com/TryGhost/Ghost\">Ghost</a>"
  cloudlink="<a href=\"https://www.digitalocean.com/\">DigitalOcean</a>"
  CDN="<a href=\"https://www.cloudflare.com/\">Cloudflare</a>"
}}}
```
I added an `id` property to `ghostlink`.
ghostlink="<a id = \"ghost-version\" href=\"https://github.com/TryGhost/Ghost\">Ghost</a>"
Then input the string to the corresponding tag with JS.
<script>
document.getElementById("ghost-version").innerText = document.getElementsByName("generator")[0].content;
</script>
Paste this to Admin Panel → Code Injections → Site Footer.
You are good to go. See this in action down at the footer. ↓
One less hard-coded magic number!
The goal is to show a notification when the user installs or updates the extension, linking to the relevant page.
var extensionPage = 'https://chosunghyun.com/youtube-comment-language-filter'
var updateLogPage = 'https://chosunghyun.com/youtube-comment-language-filter/updates'
chrome.runtime.onInstalled.addListener(function (object) {
if (object.reason === 'install') {
chrome.notifications.create(extensionPage, {
title: 'YCLF is now installed 😎',
message: 'Click here to learn more about the extension!',
iconUrl: './images/min-icon128.png',
type: 'basic',
})
} else if (object.reason === 'update') {
chrome.notifications.create(updateLogPage, {
title: 'YCLF updated to v' + chrome.runtime.getManifest().version + ' 🚀',
message: "Click here to check out what's new!",
iconUrl: './images/min-icon128.png',
type: 'basic',
})
}
})
Also available on GitHub.

- `iconUrl` should be the path from `manifest.json` to the image file, not from the background script.
- Use `chrome.runtime.getManifest().version` to get the version of the extension.
- Send the notification from `background.js` with the given detail. Sending notifications directly from `content.js` seems restricted. Check this post for more information.

Generally, you would need an event listener for each notification. However, there is a neat way to reduce duplicate code.
chrome.notifications.onClicked.addListener(function (notificationId) {
chrome.tabs.create({ url: notificationId });
});
The trick is to store the link in the `notificationId` field and attach an event listener to the notifications. This way, you can use just one event listener to open multiple types of links.
See the `chrome.notifications` section on the Google Chrome Developer Docs.

This doesn't seem to be the ultimate answer. While the notification opens the intended page when the user clicks it right after it pops up, it does not open the page on click once it has been sent to the notification center. This post will be updated if I find a better solution.
If your Ghost CMS blog's Handlebars theme shows published dates in relative time (like Published 11 months ago), you will find Handlebars code like this in your theme files.
<time datetime="{{date format='YYYY-MM-DD'}}">
  {{date published_at timeago="true"}}
</time>
The {{date published_at timeago="true"}} helper is responsible for the relative time. Change it to this.
<time datetime="{{date format='YYYY-MM-DD'}}">
  {{date published_at format='MMMM DD, YYYY'}}
</time>
This will give something like September 07, 2000. You can use moment.js (https://momentjs.com/) syntax for fine-tuning the details.
<!-- 2000 September 07 09:00:00 PM -->
<time datetime="{{date format='YYYY-MM-DD hh:mm:ss A'}}">
  {{date published_at format='YYYY MMMM DD hh:mm:ss A'}}
</time>

<!-- 2000 09 07 09:00 PM -->
<time datetime="{{date format='YYYY-MM-DD hh:mm A'}}">
  {{date published_at format='YYYY MM DD hh:mm A'}}
</time>

<!-- 2000 09 07 21:00 -->
<time datetime="{{date format='YYYY-MM-DD HH:mm'}}">
  {{date published_at format='YYYY MM DD HH:mm'}}
</time>
For months, use MM for the short notation (like 09) and MMMM for the extended notation (like September). If you want to display the time, hh is the 12-hour clock, HH the 24-hour clock, mm minutes, ss seconds, and A the AM/PM marker. For example, I am using the following.
<time datetime="{{date format='YYYY-MM-DD h:mm A'}}">
  {{date published_at format='YYYY/MM/DD h:mm A'}}
</time>
import os
import math
def getFileSize(path, kilo=1024, readable=False, shortRead=False):
    size = 0
    sizeArr = []
    units = ["B", "KB", "MB", "GB", "TB", "PB", "EB"]
    # Sum every file under a directory, or take a single file's size
    if os.path.isdir(path):
        for dirpath, dirnames, filenames in os.walk(path):
            for i in filenames:
                size += os.path.getsize(os.path.join(dirpath, i))
    elif os.path.isfile(path):
        size += os.path.getsize(path)
    if size == 0:  # avoid math.log(0) on empty files or directories
        return "0B" if readable else [0]
    # Index of the largest unit that fits (0 = B, 1 = KB, ...)
    unit = math.floor(math.log(size, kilo))
    # sizeArr[k] holds the component of the size for units[k]
    for k in range(0, unit + 1):
        sizeArr.append(math.floor((size % kilo ** (k + 1)) / kilo ** k))
    if readable:
        if not shortRead:
            # e.g. "3GB 138MB 404KB 583B"
            sizeString = ""
            for x in range(unit, -1, -1):
                sizeString += str(sizeArr[x]) + units[x] + " "
            return sizeString[:-1]
        else:
            if unit == 0:  # smaller than one KB: no fractional part available
                return str(sizeArr[0]) + units[0]
            # e.g. "3.134GB", approximating the decimal part from the next unit down
            return (
                str(sizeArr[-1])
                + "."
                + str(math.floor(sizeArr[-2] / 1.024))
                + units[len(sizeArr) - 1]
            )
    else:
        return sizeArr
C:\Users\anacl\OneDrive\Documents (Folder): 3.13GB (3,366,343,239 Bytes)
C:\Users\anacl\OneDrive\Pictures (Folder): 83.4MB (87,468,781 Bytes)
C:\Users\anacl\OneDrive\Pictures\screenshot.png (File): 139KB (143,262 Bytes)

print(getFileSize("C:\\Users\\anacl\\OneDrive\\Documents"))
print(getFileSize("C:\\Users\\anacl\\OneDrive\\Pictures"))
print(getFileSize("C:\\Users\\anacl\\OneDrive\\Pictures\\screenshot.png"))
# Expected Output
# [583, 404, 138, 3]
# [749, 426, 83]
# [926, 139]
Each element of the returned list is the component of the file size in [B, KB, MB, GB, ...] order.
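As a sanity check, the raw byte count can be reconstructed from that list by treating each element as a coefficient of a power of 1024. A quick sketch (to_bytes is my own helper name, not part of the script above):

```python
def to_bytes(size_list, kilo=1024):
    # [B, KB, MB, ...] -> total bytes: sum of value * 1024**index
    return sum(v * kilo ** i for i, v in enumerate(size_list))

print(to_bytes([926, 139]))          # 143262, matching screenshot.png above
print(to_bytes([583, 404, 138, 3]))  # 3366343239, matching the Documents folder
```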
print(getFileSize("C:\\Users\\anacl\\OneDrive\\Documents", readable=True))
print(getFileSize("C:\\Users\\anacl\\OneDrive\\Pictures", readable=True))
print(getFileSize("C:\\Users\\anacl\\OneDrive\\Pictures\\screenshot.png", readable=True))
# Expected Output
# 3GB 138MB 404KB 583B
# 83MB 426KB 749B
# 139KB 926B
print(getFileSize("C:\\Users\\anacl\\OneDrive\\Documents", readable=True, shortRead=True))
print(getFileSize("C:\\Users\\anacl\\OneDrive\\Pictures", readable=True, shortRead=True))
print(getFileSize("C:\\Users\\anacl\\OneDrive\\Pictures\\screenshot.png", readable=True, shortRead=True))
# Expected Output
# 3.134GB
# 83.416MB
# 139.904KB
While Chrome and Firefox are two very different browsers, Chrome Extension and Firefox Add-on are now more similar than ever. Therefore, it is possible to transplant a Chrome extension to a Firefox Add-on and publish it to the Mozilla store with minor changes. This post is how I transplanted my YouTube Comment Language Filter to Firefox.
First of all, Firefox can run code under the chrome namespace, such as chrome.tabs.onUpdated. However, there are still a few APIs that Firefox cannot run. Mozilla offers a handy website to check for Chrome incompatibilities.
Note that unlike Chrome, Firefox does not load a .crx file from chrome://extensions; temporary add-ons are loaded from about:debugging instead. If there is any problem, I advise you to visit the MDN docs and see what code caused it. I didn't have any problems, so I cannot share any experience here.
Firefox also requires an ID inside the manifest.json file, like the following.
"browser_specific_settings": {
"gecko": {
"id": "addon@example.com",
"strict_min_version": "42.0"
}
},
As you can see, you can also specify a strict_min_version here. See the MDN docs for details.
This was a minor hassle, since Chrome could not recognize the above code block. So you need to keep two manifest.json files: one with the block (for Firefox) and one without it (for Chrome). If I find a more straightforward way, I will add it here.
One little tip: make sure you don't include any unnecessary files like .DS_Store. macOS's default Finder compressor sometimes sneaks them in, so I recommend using Keka.
I recently found this:
This is just a thought. But it might be nice to have some sort
of easter egg message in here for the hard core Apple fans that
will stop the video.
01010011 01101111 00100000 01111001 01101111 01110101
00100000 01110100 01101111 01101111 01101011 00100000
01110100 01101000 01100101 00100000 01110100 01101001
01101101 01100101 00100000 01110100 01101111 00100000
01110100 01110010 01100001 01101110 01110011 01101100
01100001 01110100 01100101 00100000 01110100 01101000
01101001 01110011 00111111 00100000
01010111 01100101 00100000 01101100 01101111 01110110
01100101 00100000 01111001 01101111 01110101 00101110
So I made a short script.
egg = '''
01010011 01101111 00100000 01111001 01101111 01110101
00100000 01110100 01101111 01101111 01101011 00100000
01110100 01101000 01100101 00100000 01110100 01101001
01101101 01100101 00100000 01110100 01101111 00100000
01110100 01110010 01100001 01101110 01110011 01101100
01100001 01110100 01100101 00100000 01110100 01101000
01101001 01110011 00111111 00100000
01010111 01100101 00100000 01101100 01101111 01110110
01100101 00100000 01111001 01101111 01110101 00101110
'''.split()
for e in egg:
    print(chr(int(e, 2)), end="")
print()
It said...
So you took the time to translate this? We love you.
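Going the other way is just as short. Here is a minimal sketch that encodes a string back into the same space-separated 8-bit form (to_binary is my own helper name):

```python
def to_binary(text):
    # format(..., "08b") renders each character's code point as 8 binary digits
    return " ".join(format(ord(c), "08b") for c in text)

encoded = to_binary("We love you.")
print(encoded)  # starts with 01010111 01100101 ...
# Feeding this back through the decoder above returns the original string.
```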
The only thing I missed about my Windows computer was locking the screen, since I was so used to locking it with ⊞Win+L. The Mac offered an alternative called Hot Corners, but it was never as intuitive and fast as pressing ⊞Win+L. However, starting with macOS Mojave, the Mac has a native Lock Screen command (^Control+⌘Command+Q), and we can remap it to ⌘Command+L.
1. Open the System Preferences app.
2. Go to Keyboard → Shortcuts → App Shortcuts and press + at the bottom.
3. On your Menu Bar, press the Apple logo and find the menu item with the ^Control+⌘Command+Q shortcut; it is responsible for locking your Mac. Remember the name of that menu item.
4. Go back to the System Preferences app. Select All Applications under Application, then enter the Menu Title you just checked; it will vary according to your macOS language preference. Finally, enter ⌘Command+L at Keyboard Shortcut (or any other shortcut you would rather lock your Mac with). Press Add when you are finished.
5. Now you can see that the same Lock Screen option at your menu bar will show the Keyboard Shortcut you just changed.
This method works in almost every case. Occasionally, an app already claims ⌘Command+L as its own shortcut. One example is the System Preferences app itself, which uses ⌘Command+L to go back to its main overview. I have never seen any other case where ⌘Command+L doesn't work as expected.
I recently came across a Korean font called Spoqa Han Sans. It attracted me due to its simplicity and readability. However, I didn't like its English glyph (based on Noto Sans, which I didn't love to death.)
After a while, I figured out how to define a language range for each font: while declaring a font face, we add a unicode-range.
@font-face {
  font-family: 'Spoqa Han Sans';
  /* Omitted */
  unicode-range: U+AC00-D7AF;
}
U+AC00-D7AF is the Unicode range of Korean glyphs.

But where does the @font-face block come from? In most cases, we define a font by adding either of the following statements.
<link href="//spoqa.github.io/spoqa-han-sans/css/SpoqaHanSans-kr.css" rel="stylesheet" type="text/css" />
@import url(//spoqa.github.io/spoqa-han-sans/css/SpoqaHanSans-kr.css);
This approach has the advantage that I do not need to care about updating the font; it updates automatically whenever the font is updated on the font server (which, of course, could also be a disadvantage in some cases). But we cannot define a unicode-range this way. So rather than importing the stylesheet as above, we can copy and paste the stylesheet itself. Access the URL from your @import statement with https: in front of it, and you will find the font's license and its @font-face statements.

Copy those statements into your CSS, then define the unicode-range. Note that this method will not automatically update your font, but you can always recopy and repaste when the @font-face statements on the font server get updated.
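Putting it together, here is a sketch of how the two fonts can coexist. The src path and the Latin font name below are placeholders, not real values:

```css
/* Serve this face for Korean glyphs only */
@font-face {
  font-family: 'Spoqa Han Sans';
  src: url('/fonts/SpoqaHanSans-Regular.woff2') format('woff2'); /* hypothetical path */
  unicode-range: U+AC00-D7AF;
}

body {
  /* Characters outside U+AC00-D7AF fall through to the next font in the stack */
  font-family: 'Spoqa Han Sans', 'Some Latin Font', sans-serif;
}
```

Because the first face only covers the Korean range, every Latin character is rendered with the next font in the stack.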
Ghost opens every external URL on the same page by default. This behavior distracts the user and increases the bounce rate. While it seems that there is no built-in option on Ghost to open external links in a new tab, we can fix this by injecting a short snippet of code.
<script>
  var links = document.querySelectorAll('a');
  for (var i = 0; i < links.length; i++) {
    if (links[i].hostname != window.location.hostname) {
      links[i].target = '_blank';
      links[i].rel = 'noopener';
    }
  }
</script>
Paste this code at Ghost Settings → Code injection → Site Footer.
The script collects every link on the page. For every link whose hostname differs from the Ghost site's hostname, it changes the target and rel values.

Setting target to _blank does the main job. However, the newly opened page can then reach back to the original page through window.opener, which brings possible performance drops and security risks. We prevent this by setting rel to noopener.

Modifying every link with JavaScript on each page load might sound slow, but the performance impact will be negligible unless the page has a great many external links. This trick will do its job until Ghost provides a built-in option to open links in a new tab.
MinsaPay is a payment system that was built for the Minjok Summer Festival. It works like a prepaid tap-to-pay card. Every source and piece of anonymized transaction data is available on GitHub.
Number of Users: ~400
Number of Transactions: ~2900
Total Payment Amount: ~$4,000 USD (₩4,604,210 KRW)
Total Transaction Amount: ~$14,400 USD (₩17,319,300 KRW)
GitHub - minsapay/server: Payment server & web app for KMLA Minjok Festival (Summer Festival)
GitHub - minsapay/transaction-data-2019: Transaction Data of MinsaPay 2019
My high school, Korean Minjok Leadership Academy (KMLA), had a summer festival like any other school. Students opened booths to sell food and items they created. We also screened movies produced by our students and hosted dance clubs. The water party in the afternoon is one of the festival's oldest traditions.
Because there were a lot of products being sold, it was hard to use regular paper money (a subsequent analysis by the MinsaPay team confirmed that the total volume of payments reached more than $4,000). So our student council created proprietary money called the Minjok Festival Notes. The student council had a dedicated student department act as a bank to publish the notes and monitor the currency's flow. Also, the Minjok Festival Notes acted as festival memorabilia since each year's design was unique.
The Minjok Festival Note design for 2018 had photos of the KMLA student council members at the center of the bill. The yellow one was worth approximately $5.00, the green one was worth $1.00, and the red one was worth 50 cents.
But there were problems. First, it was not eco-friendly. Thousands of notes were printed and disposed of annually for just a single day. It was a waste of resources. The water party mentioned above was problematic as well. The student council made Minjok Festival Notes out of nothing special, just ordinary paper. That made the notes extremely vulnerable to water, and students lost a lot of money after the water party. Eventually, the KMLA students sought a way to resolve all of these issues.
The student council first offered me the chance to develop a payment system. Because I had thought about the case beforehand, I thought it made a lot of sense. I instantly detailed the feasibility and possibilities of the payment system. But even after designing the system in such great detail that I could immediately jump into the development, I turned down the offer.
I believe in the social responsibilities of the developer. Developers should not be copy-pasters who meet the technical requirements and deliver the product. On the contrary, they are the people with enormous potential to open an entirely new horizon of the world by conversing with computers and other technological media. Therefore, developers have started to possess the decisive power to impact the daily lives of the rest of us, and it is their bound responsibility to use that power to enhance the world. That means developers should understand how impactful a single line of code can be.
Of course, I was tempted. But I had never done a project where security was the primary interest. It was a considerable risk to start with a project like this without any experience or knowledge in security. Many what-ifs flooded my brain. What if a single line of code makes the balance disappear? What if the payment record gets mixed up? What if the server is hacked? More realistically, what if the server goes down?
People praise audacity, but I prefer prudence. Bravery and arrogance are just one step apart. A financial system should be flawless (or as flawless as possible). It should both be functional and be performing resiliently under any condition. It didn't seem impossible. But it was too naïve to believe nothing would happen, as I was (and am still) a total newbie in security. So I turned it down.
The student council still wanted to continue the project. I thought they would outsource the task to some outside organization. It sounded better since they would at least have some degree of security. But the council thought differently. They were making it themselves with Google Forms.
When I was designing the system, the primary issue was payment authorization. The passcode shouldn't be shared with the merchant, while the system could correctly authorize and process the order. The users can only use the deposited money in their accounts. This authorization should happen in real-time. But I couldn't think of a way to nail the real-time authorization with Google Forms. So I asked for more technical details from one student council member. The idea was as follows:
So the idea was to use the Google Form's unique address as a password. Since the merchants are supposed to use incognito mode, there should in theory be a safety layer protecting the user's Google Form address. Users would then make a deferred payment after the festival. But as a developer, I saw multiple problems with this approach.
It could work in an ideal situation. But it will accompany a great deal of confusion and entail a noticeable discomfort on the festival day. That made me think that even though my idea had its risks, mine would still be better. So, I changed my mind.
Fortunately, I met a friend with the same intent—our vision and idea about the project aligned. I explained my previous concept, and we talked to each other and co-developed the actual product. We also met at a cafe several times. I set up and managed the DNS and created the front-end side. Below are the things we thought about while making the product.
We disagreed on two problems.
We chose web and RFID, conceding one for each. I agreed to use RFID after learning that using a camera to read bar codes wasn't that fast or efficient.
Main Home, Admin Page, and Balance Check Page of the product.
Remember that one of the concerns was about the server going down?
On the festival day, senior students had to self-study at school. Then at one moment, I found my phone had several missed calls. The server went down. I rushed to the festival and sat in a corner, gasping and trying to find the reason. Finally, I realized the server was intact, but the database was not responding.
It was an absurd problem. (Well, no problem is absurd, per se, but we couldn't hide our disappointment after figuring out the reason.) We thought the free plan would be more than enough when we constructed our database. However, the payment requests surged and exceeded the database free tier. So we purchased a $9.99 plan, and the database went back to work. It was one of the most nerve-wracking events I ever had.
The moment of upgrading the database plan. $10 can cause such chaos!
While the server was down, each booth made a spreadsheet and wrote down who needed to pay how much. Afterward, we settled the problem by opening a new booth for making deferred payments.
The payment log showed that the server went down right after 10:17:55 AM and returned at 10:31:10 AM. It was evident yet intriguing that the payments made per minute were around 10 to 30 before the crash but went down to almost zero right after restoring the server. If you are interested, please look here.
Due to exceeding the database free tier, the server went down for 13 minutes and 15 seconds after payment #1546.
The entire codebase for MinsaPay is available on GitHub. First, though, I must mention that I still question the integrity of this system. One developer reported a security flaw that we managed to fix before launch. However, the system has unaddressed flaws; for example, though unlikely, merchants can still copy RFID values and forge ID cards.
I wanted to give students studying data analysis more relatable and exciting data. Also, I wanted to provide financial insights for students planning to run a booth the following year. Therefore, we made all payment data accessible.
However, a data privacy problem arose. So I wrote a short script to anonymize personal data. If a CSV file is provided, it will anonymize a selected column. Identical values will have the same anonymized value. You can review the anonymized data here.
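The real script lives in the MinsaPay repository; below is only a sketch of the idea, and the function name, salt, and column names are hypothetical. The key property is that the mapping is stable: identical inputs always produce identical pseudonyms, so per-user analysis still works on the anonymized data.

```python
import hashlib

def anonymize_column(rows, column, salt="festival-2019"):  # hypothetical helper
    """Replace one column with a stable pseudonym derived from a salted hash."""
    out = []
    for row in rows:
        digest = hashlib.sha256((salt + row[column]).encode()).hexdigest()[:8]
        masked = dict(row)
        masked[column] = digest
        out.append(masked)
    return out

rows = [
    {"user": "alice", "amount": "1000"},
    {"user": "bob", "amount": "500"},
    {"user": "alice", "amount": "700"},
]
masked = anonymize_column(rows, "user")
# alice's two rows share one pseudonym; bob's differs; amounts are untouched
```

With a CSV file, the same function applies row by row after csv.DictReader parses it.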
I strongly recommend thoroughly auditing the entire code or rewriting it if you use this system. MinsaPay is under the MIT license.
There is ample room for improvement.
First, there are codes with numerous compromises. For example, we made a lot of trade-offs not to miss the product deadline (the festival day). We also wanted to include safety features, such as canceling payments, but we didn't have time. More time and development experience would have improved the product.
Since I wasn't comfortable with the system's security, I initially kept the repository quiet and undisclosed. Afterward, however, I realized this was a contradiction, as I knew that security without transparency is not the best practice.
Also, we were not free from human error. RFID values were long strings of digits, and a few times someone mistakenly input one into the charge-amount field, making the charge amount something like Integer.MAX_VALUE. We could have added a simple confirmation prompt, but we didn't anticipate such mistakes at the time.
In hindsight, it was such a great experience for me, who had never done large-scale real-life projects. I found myself compromising even after acknowledging the anti-patterns. I also understood that knowing and doing are two completely different things since knowing has no barriers, but doing accompanies extreme stress both in time and environment.
Still, it was such an exciting project.
Lastly, I want to thank everyone who made MinsaPay possible.
Thank you!
👋
This post was written when I had little experience with Node.js. It does not describe an advisable approach; it simply serves as the departure point of my journey.
I have used AWS Elastic Beanstalk for a while and figured Heroku has several advantages over AWS. So I have migrated my AWS EB app called KMLA Forms to Heroku. For your information, KMLA Forms is a web app that simplifies writing necessary official documents in my school, KMLA.
A few advantages I found:
I had to make only minimal changes to app.js and package.json. The app used to listen on a hard-coded port:

// ...
http.createServer(app).listen(8081, '0.0.0.0')
console.log('Server up and running at http://0.0.0.0:8081')
// ...

Heroku assigns the port at runtime through the PORT environment variable, so the code became:

const port = process.env.PORT || 8000
// ...
app.listen(port, () => {
  console.log('App is running on port ' + port)
})
Also, I have added "start": "node app.js" to package.json so Heroku knows how to launch the app. The code is on GitHub, and the web app is launched here.