<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>1-1sam</title>
    <subtitle>Sam Young's personal webpage.</subtitle>
    <link href="https://1-1sam.codeberg.page/feed/atom.xml" rel="self"/>
    <link href="https://1-1sam.codeberg.page/pages/"/>
    <id>https://1-1sam.codeberg.page/pages/</id>
    <updated>2025-12-24T18:00:00Z</updated>
    <entry>
        <title>newver</title>
        <link href="/pages/pages/posts/newver.html"/>
        <updated>2025-12-24T18:00:00Z</updated>
        <summary></summary>
        <author><name>Samuel Young</name></author>
        <content type="html">

&lt;h1&gt;newver&lt;/h1&gt;
&lt;p&gt;I have recently released my newest project
&lt;a href="https://metacpan.org/dist/App-newver/view/bin/newver"&gt;newver&lt;/a&gt;, a utility for
scanning upstream web pages for new software versions. If you have experience
with Debian's
&lt;a href="https://manpages.debian.org/trixie/devscripts/uscan.1.en.html"&gt;uscan&lt;/a&gt;,
&lt;strong&gt;newver&lt;/strong&gt; is similar, but it is not designed around a specific packaging
scheme. The idea came to me after playing around with &lt;strong&gt;uscan&lt;/strong&gt; and wishing
there were a similar tool that could be used for non-Debian packages. I
previously used the &lt;a href="https://repology.org/"&gt;repology&lt;/a&gt; API to stay informed
about new software releases, but I didn't like the idea of relying on a
third-party service that itself relied on other package repositories that may not
be completely up-to-date or accurate.&lt;/p&gt;
&lt;p&gt;If you are interested, you can install &lt;strong&gt;newver&lt;/strong&gt; via CPAN:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-bash"&gt;cpanm App::newver
# or
cpan App::newver
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Thank you, and Merry Christmas!&lt;/p&gt;



        </content>
    </entry>
    <entry>
        <title>noss 2.00</title>
        <link href="/pages/pages/posts/noss-200.html"/>
        <updated>2025-11-12T18:00:00Z</updated>
        <summary></summary>
        <author><name>Samuel Young</name></author>
        <content type="html">

&lt;h1&gt;noss 2.00&lt;/h1&gt;
&lt;p&gt;I have recently released version 2.00 of my RSS feed reader &lt;strong&gt;noss&lt;/strong&gt;. You can find
the project's homepage and repository
&lt;a href="https://codeberg.org/1-1sam/noss"&gt;here on Codeberg&lt;/a&gt;. &lt;strong&gt;noss&lt;/strong&gt; is also available
on &lt;a href="https://metacpan.org/release/SAMYOUNG/WWW-Noss-2.00"&gt;CPAN&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The big feature of this release that I believed was worthy of bumping &lt;strong&gt;noss&lt;/strong&gt;
to major version 2 was the addition of colored output. This should hopefully
make the output of &lt;strong&gt;noss&lt;/strong&gt; prettier and easier to understand. And for those of
you who hate color, you can disable it via the &lt;code&gt;colored_output&lt;/code&gt; configuration
setting or the &lt;code&gt;--no-color&lt;/code&gt; CLI flag. There are also a lot of other nifty
minor features and improvements introduced in this release, a complete list of which
can be found in the &lt;code&gt;Changes&lt;/code&gt; file.&lt;/p&gt;
&lt;p&gt;In other news, I've been in communication with some Debian developers who are
working on packaging &lt;strong&gt;noss&lt;/strong&gt; for the Debian repos! It's crazy to think that
software I wrote is being officially packaged for Debian. Thank you to the Debian
developers for their hard work :-).&lt;/p&gt;



        </content>
    </entry>
    <entry>
        <title>LLM Poison</title>
        <link href="/pages/pages/posts/llm-poison.html"/>
        <updated>2025-11-03T18:00:00Z</updated>
        <summary></summary>
        <author><name>Samuel Young</name></author>
        <content type="html">

&lt;h1&gt;LLM Poison&lt;/h1&gt;
&lt;p&gt;After being inspired by
&lt;a href="https://maurycyz.com/misc/the_cost_of_trash/"&gt;this blog post&lt;/a&gt;, I've set up my
own humble little LLM poison trap on this site. It's not as sophisticated as the
one you'll find in the page linked prior because of the limitations of
Codeberg's static page hosting, but it was still a fun project to do. A link
to the pages can be found &lt;a href="/llm/1.html"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I implemented it by writing a Perl script that reads text from input, builds
a Markov tree from said input, then outputs Markov-chain-generated text. I
used an aggregation of Slackware's &lt;code&gt;fortune&lt;/code&gt; databases, Moby-Dick,
Three Men in a Boat, and Pudd'nhead Wilson as the source input text. Another
Perl script then takes these generated texts and converts them to HTML pages
which all link to each other, so that LLM scrapers will get &amp;quot;stuck&amp;quot; hopping
between these pages.&lt;/p&gt;
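&lt;p&gt;If you're curious what such a generator looks like, here is a rough sketch of the idea in Python (my actual script is written in Perl and is a bit more involved; the sample text here is just a placeholder):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import random

def build_chain(words):
    # Map each word to the list of words observed to follow it.
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, length=50):
    # Walk the chain, picking a random successor at each step.
    word, out = start, [start]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:
            break
        word = random.choice(successors)
        out.append(word)
    return " ".join(out)

text = "the quick brown fox jumps over the lazy dog and the quick red fox"
words = text.split()
print(generate(build_chain(words), "the"))
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Feed it a big enough corpus and the output reads as plausible-but-meaningless prose, which is exactly what you want a scraper to choke on.&lt;/p&gt;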
&lt;p&gt;It was a fun project! I don't know how effective it will actually be,
considering it's just 30 different static web pages, and Codeberg works hard to
block LLM scrapers from scraping its services, but it feels nice to do my part
in combating the scourge of evil, incompetent scrapers taking over the web :-).&lt;/p&gt;
&lt;h2&gt;Links&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://maurycyz.com/misc/the_cost_of_trash/"&gt;https://maurycyz.com/misc/the_cost_of_trash/&lt;/a&gt; - Article that inspired me to
pursue this endeavor.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Markov_chain"&gt;https://en.wikipedia.org/wiki/Markov_chain&lt;/a&gt; - Wikipedia article for Markov
chains.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.dcode.fr/markov-chain-text"&gt;https://www.dcode.fr/markov-chain-text&lt;/a&gt; - Online Markov chain text generator
you can use to get an idea of what generated text will look like.&lt;/li&gt;
&lt;/ul&gt;



        </content>
    </entry>
    <entry>
        <title>My site now has an Atom feed</title>
        <link href="/pages/pages/posts/atom.html"/>
        <updated>2025-10-27T19:00:00Z</updated>
        <summary></summary>
        <author><name>Samuel Young</name></author>
        <content type="html">

&lt;h1&gt;My Site Now Has an Atom Feed&lt;/h1&gt;
&lt;p&gt;Despite my affinity for RSS, I have never actually gotten around to creating
a feed for my own site. I feel some may find it strange that someone who maintains
an RSS feed reader doesn't even have a feed of their own, so I have finally
corrected that. A link to the feed can be found &lt;a href="/feed/atom.xml"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I would also like to use this opportunity to recommend some feeds I enjoy
reading. Most of them are primarily related to computing and programming.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://andrewkelley.me/"&gt;Andrew Kelley&lt;/a&gt; (&lt;a href="https://andrewkelley.me/rss.xml"&gt;feed&lt;/a&gt;) - Creator of the Zig programming language. Interesting programming and
computing culture articles.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.slackware.nl"&gt;AlienBob&lt;/a&gt; (&lt;a href="https://blog.slackware.nl/feed/"&gt;feed&lt;/a&gt;) -
Prolific Slackware contributor. Mostly writes about Slackware.&lt;/li&gt;
&lt;li&gt;&lt;a href="http://briandfoy.github.io/"&gt;briandfoy&lt;/a&gt; (&lt;a href="https://briandfoy.github.io/feed.xml"&gt;feed&lt;/a&gt;) -
Perl writer. Programming and Perl articles.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://drewdevault.com"&gt;Drew DeVault&lt;/a&gt; (&lt;a href="https://drewdevault.com/blog/index.xml"&gt;feed&lt;/a&gt;) -
Developer known for his work on Wayland. Interesting programming and culture
articles.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.gingerbill.org/article/"&gt;GingerBill&lt;/a&gt; (&lt;a href="https://www.gingerbill.org/article/index.xml"&gt;feed&lt;/a&gt;) - Lead developer of the Odin programming language. Interesting articles about
programming.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.jwz.org/blog/"&gt;jwz&lt;/a&gt; (&lt;a href="https://cdn.jwz.org/blog/feed/"&gt;feed&lt;/a&gt;) -
Legendary hacker known for his work on Netscape. Writes about whatever he feels
like.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://lwn.net/Articles/"&gt;LWN&lt;/a&gt; (&lt;a href="https://lwn.net/headlines/Features"&gt;feed&lt;/a&gt;) -
Linux Weekly News. Weekly articles about Linux and open-source projects.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.marginalia.nu/log/"&gt;Marginalia&lt;/a&gt; (&lt;a href="https://www.marginalia.nu/log/index.xml"&gt;feed&lt;/a&gt;) -
Creator of the &lt;a href="https://marginalia-search.com/"&gt;Marginalia Search Engine&lt;/a&gt;. Primarily
writes about their experience developing their search engine.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.nayuki.io/"&gt;Project Nayuki&lt;/a&gt; (&lt;a href="https://www.nayuki.io/rss20.xml"&gt;feed&lt;/a&gt;) -
Programmer and Patchouli zealot. Interesting articles about programming and
mathematics.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://nrk.neocities.org/"&gt;NRK&lt;/a&gt; (&lt;a href="https://nrk.neocities.org/rss.xml"&gt;feed&lt;/a&gt;) -
C developer of Suckless-like projects. Interesting articles about C programming.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://rachelbythebay.com/w/"&gt;rachelbythebay&lt;/a&gt; (&lt;a href="https://rachelbythebay.com/w/atom.xml"&gt;feed&lt;/a&gt;) -
Tech articles and sysadmin war stories.&lt;/li&gt;
&lt;li&gt;&lt;a href="http://ratfactor.com/"&gt;ratfactor&lt;/a&gt; (&lt;a href="https://ratfactor.com/atom.xml"&gt;feed&lt;/a&gt;) -
Fascinating tech explorations and programming articles. My personal favorite
site :-).&lt;/li&gt;
&lt;li&gt;&lt;a href="https://unixdigest.com"&gt;Unix Digest&lt;/a&gt; (&lt;a href="https://unixdigest.com/feed.rss"&gt;feed&lt;/a&gt;) -
Articles about Unix (primarily BSD).&lt;/li&gt;
&lt;/ul&gt;



        </content>
    </entry>
    <entry>
        <title>quixftp</title>
        <link href="/pages/pages/posts/quixftp.html"/>
        <updated>2025-10-26T19:00:00Z</updated>
        <summary></summary>
        <author><name>Samuel Young</name></author>
        <content type="html">

&lt;h1&gt;quixftp&lt;/h1&gt;
&lt;p&gt;&lt;img src="/img/don-quixote.png" alt="quixftp"&gt;&lt;/p&gt;
&lt;p&gt;I have recently released the first version of my latest project,
&lt;a href="https://codeberg.org/1-1sam/quixftp"&gt;quixftp&lt;/a&gt;. It is a simple, zero (or more ;-)
configuration FTP server designed for quickly sharing files over the network.
It focuses on ease-of-use and convenience over robustness and security; it's
designed for simple file-sharing rather than serving as a fully-fledged FTP
server like vsftpd.&lt;/p&gt;
&lt;p&gt;This is my first &amp;quot;serious&amp;quot; project in Go, and so far I have enjoyed my
experience with Go :-). It feels a lot like C but with all the annoying
headache-inducing parts removed (manual memory management, hacky string
handling, a crappy standard library, etc.). I hope to do more projects with it
in the future!&lt;/p&gt;
&lt;p&gt;Also, it's been a year since my last blog post. I find writing the kind of
long-form posts I enjoy reading to be quite difficult, and it drains a lot of
energy that I would rather spend on other things, which is why it has taken me
so long to write again. I think for the future I'll stick to short-form writing,
and maybe someday I'll graduate to longer posts.&lt;/p&gt;



        </content>
    </entry>
    <entry>
        <title>New(ish) Site</title>
        <link href="/pages/pages/posts/site-generator.html"/>
        <updated>2024-12-26T18:00:00Z</updated>
        <summary></summary>
        <author><name>Samuel Young</name></author>
        <content type="html">

&lt;h1&gt;New(ish) Site&lt;/h1&gt;
&lt;p&gt;Just recently, I have switched my site over from using handwritten HTML to HTML
generated from Markdown files using a Raku script. You can look at what the new
site's structure looks like on its
&lt;a href="https://codeberg.org/1-1sam/pages"&gt;Codeberg page&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The script I wrote for generating HTML pages is ugly and not very versatile; it's
designed exclusively to generate this site, so I wouldn't recommend you use it
yourself. Also, the duck image is gone, sorry :-(. Maybe one day it will return,
when I can figure out how to get it formatted correctly. For the time being,
I'll just leave the duck here.&lt;/p&gt;
&lt;p&gt;&lt;img src="/img/duck.png" alt="Duck"&gt;&lt;/p&gt;
&lt;p&gt;The use of Markdown should make writing and maintaining web pages much easier.
Maybe I'll start using this site more often. Probably not. Who knows.&lt;/p&gt;
&lt;p&gt;Anyways, I thought this was notable enough to write a quick post about. That's
all.&lt;/p&gt;
&lt;h2&gt;Edit: 2/1/2025&lt;/h2&gt;
&lt;p&gt;The duck is back :-).&lt;/p&gt;



        </content>
    </entry>
    <entry>
        <title>Manuals for Raku Programs</title>
        <link href="/pages/pages/posts/raku-manual.html"/>
        <updated>2024-12-12T18:00:00Z</updated>
        <summary></summary>
        <author><name>Samuel Young</name></author>
        <content type="html">

&lt;h1&gt;Manuals for Raku Programs&lt;/h1&gt;
&lt;p&gt;In this post, I will detail my strategy for distributing manuals with Raku
programs, especially those which you wish to distribute via Zef.&lt;/p&gt;
&lt;h2&gt;TLDR&lt;/h2&gt;
&lt;p&gt;Write your manual in Pod, then include a &lt;code&gt;--help&lt;/code&gt; option that does&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;use Pod::To::Text;

multi sub MAIN(Bool :h(:$help)!) {

    put pod2text $=pod;

}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;The Problem&lt;/h2&gt;
&lt;p&gt;Manuals (or manpages) are a form of documentation that have been standard in Unix
operating systems since ancient times. I believe two reasons manpages have stood
the test of time are the following:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Detail&lt;/strong&gt;: Manpages typically contain all the information necessary to
use a program (or they should, at least ;-).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Accessibility&lt;/strong&gt;: Manpages can be immediately accessed via the command
line using the &lt;code&gt;man&lt;/code&gt; command. Convenient, and no network required!&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Despite this, there are two issues with traditional manpages.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Portability&lt;/strong&gt;: Systems like Windows don't believe in manpages.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Language Support&lt;/strong&gt;: Some programming languages don't support
distributing/generating manpages very well.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;These are unfortunately problems you will run into when trying to distribute
a manual for a cross-platform Raku program. The conventional way to distribute
documentation for a Raku program is to put the manual either in its backend module's Pod or
in the README. Neither of these is ideal, primarily because of
accessibility concerns. If you put it in the program's module Pod, the user will
have to know how to access it. This isn't an issue if you're a Raku dev who
knows about &lt;strong&gt;rakudoc&lt;/strong&gt;, but it is an issue if you aren't in the know and just
want to read the documentation. The alternative, putting it in the README, is
just as bad, as the README is not installed with the distribution, meaning the
user will have to go out of their way to find and save the README themselves.&lt;/p&gt;
&lt;h2&gt;The Solution&lt;/h2&gt;
&lt;p&gt;Many programs can be run with a &lt;code&gt;--help&lt;/code&gt; option,
which prints out a help message detailing the usage of the program
and its options. The help message is usually meant to be just a quick
synopsis of the usage of the program, as opposed to the manual, which contains
much more comprehensive documentation. However, many programs have their
help option bring up their manual (exiftool and systemctl, off the top of my head),
so what if we did something similar: have a &lt;code&gt;--help&lt;/code&gt; option that
prints out a manual? By combining the manual and help message, we have a manual
that is:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Accessible: Can be read from the program itself.&lt;/li&gt;
&lt;li&gt;Detailed: We can choose the level of detail to write our help message.&lt;/li&gt;
&lt;li&gt;Portable: Built into the program itself; if the program can be run, it can be read.&lt;/li&gt;
&lt;li&gt;Supported: Raku provides a simple way to implement this.&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;use Pod::To::Text;

multi sub MAIN(Bool :h(:$help)!) {

    put pod2text $=pod;

}

=begin pod

=head1 NAME
...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;What this code does is render the Pod in the file it is located in and print
it out. It accomplishes this by using &lt;code&gt;pod2text&lt;/code&gt;, which is exported
by Pod::To::Text. Pod::To::Text is a core module provided by Raku, so
portability is not a concern here. &lt;code&gt;$=pod&lt;/code&gt; is the variable holding
your file's Pod structure; it's guaranteed to be there as long as the file has Pod.
Then you just have to write your manual in Pod.&lt;/p&gt;
&lt;p&gt;I also like generating a README from my manuals, which I do via the
Pod::To::Markdown module. That way, my README and manual have the same
documentation, but I only need to maintain a single file.&lt;/p&gt;
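&lt;p&gt;If you want to try this yourself, something like the following should work, assuming you have Pod::To::Markdown installed (the script path here is just a placeholder):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-bash"&gt;# Render the program's Pod as Markdown and save it as the README.
raku --doc=Markdown bin/myprog > README.md
&lt;/code&gt;&lt;/pre&gt;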
&lt;p&gt;One issue with this approach is that you won't be able to use special Pod
comments (&lt;code&gt;#|&lt;/code&gt;, &lt;code&gt;#=&lt;/code&gt;). It's not a problem for me because
I never liked using them anyway, but YMMV.&lt;/p&gt;
&lt;p&gt;If you'd like to see this trick used in practice, you can view the following
programs of mine:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://codeberg.org/1-1sam/raku-ebread"&gt;ebread&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://codeberg.org/1-1sam/stouch"&gt;stouch&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;That's all. I've been trying to come up with a good idea for a new post for
a while, but nothing came to mind. I realized that manuals seem to be a
relatively untapped topic in Raku, so I thought this post might be helpful to
some folks.&lt;/p&gt;
&lt;p&gt;Merry Christmas and Happy Holidays :-)&lt;/p&gt;



        </content>
    </entry>
    <entry>
        <title>My (Inexperienced) Experience with CompTIA Certs</title>
        <link href="/pages/pages/posts/comptia.html"/>
        <updated>2024-10-17T19:00:00Z</updated>
        <summary></summary>
        <author><name>Samuel Young</name></author>
        <content type="html">

&lt;h1&gt;My (Inexperienced) Experience with CompTIA Certs&lt;/h1&gt;
&lt;p&gt;This post details my experience studying for and taking the following CompTIA
certification tests:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;CompTIA A+ 220-1101&lt;/li&gt;
&lt;li&gt;CompTIA A+ 220-1102&lt;/li&gt;
&lt;li&gt;CompTIA Network+ N10-008&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I should probably note that I do not actually have any professional IT
experience (I am working on changing that :-). I just thought that perhaps
some folks out there studying for these tests might like the perspective of
a newbie like me that they can relate to.&lt;/p&gt;
&lt;h2&gt;Time&lt;/h2&gt;
&lt;p&gt;Each of these tests took me roughly two months to study for. I passed each
of them on my first try with the following scores:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;220-1101: 746 (675 required)&lt;/li&gt;
&lt;li&gt;220-1102: 768 (700 required)&lt;/li&gt;
&lt;li&gt;N10-008:  781 (720 required)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;While I was studying for each test, I was in high school. However, I had a lot
of downtime in my classes, and that, combined with the fact that I didn't
participate in many extra-curricular activities, meant I had a decent bit of study
time. If I had to guess how many hours I spent studying a day, it would probably
be around three? I would study every day, only taking breaks if I got sick
or was particularly busy.&lt;/p&gt;
&lt;h2&gt;Studying&lt;/h2&gt;
&lt;p&gt;Here is a list of the study material I used for each test and what each costs:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;CompTIA Exam Objectives - Free&lt;/li&gt;
&lt;li&gt;Professor Messer Video Course - Free on YouTube&lt;/li&gt;
&lt;li&gt;Professor Messer Tests (no Network+) - $30&lt;/li&gt;
&lt;li&gt;Jason Dion's Practice Tests - $10&lt;/li&gt;
&lt;li&gt;Pearson ExamCram Book - Free from library&lt;/li&gt;
&lt;li&gt;Google - Free&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As you can see, I did not have to splurge on study material. Along with the
above material, I also did some hands-on stuff which I will also detail.&lt;/p&gt;
&lt;h3&gt;CompTIA Exam Objectives&lt;/h3&gt;
&lt;p&gt;This will be your most important study resource. Each CompTIA exam has a set
of exam objectives, which you can basically consider your rubric for the exam.
It goes into great detail about what topics will be tested on the exam.
You can find it for free on CompTIA's website, or in various other places on the
web. Later on, I'll explain how you can use the exam objectives during your
studies.&lt;/p&gt;
&lt;p&gt;The exam objectives will also have a huge list of acronyms that test takers
are encouraged to look over. Many people make the mistake of thinking they
have to remember every single acronym on that list and what it represents. That
is not really the case. While you should probably have some
familiarity with the acronyms, you should focus on studying the exam topics
themselves.&lt;/p&gt;
&lt;h3&gt;Professor Messer Video Courses&lt;/h3&gt;
&lt;p&gt;Professor Messer's video courses were my primary study material for all three
tests. Messer's videos are nice because they are short yet packed with information.
They do not tend to stray outside the exam objectives, so you can be sure that a lot
of what you see in the videos will show up on the exam. Each of his videos
also has a slideshow to go along with it that is pretty convenient for copying
notes. The slides can be a little terse and uninformative, so I wouldn't
recommend just blindly copying them down. Messer himself will typically go into
more detail about each subject, so you'll want to combine what he's saying with
the notes on the slides.&lt;/p&gt;
&lt;p&gt;Messer's videos are good enough to pass, but many people say that you should not
solely rely on him to study. While his videos do cover all the material on the
exam objectives, he does not usually go into quite as much detail as the exams
do, so it is important that you pair him with some more detailed study material. I
paired him with the ExamCram book, which I will go into more detail on later.
I'd also like to add that Messer's Network+ course does a poorer job than his
A+ videos of going through the material. There are many sections in the
objectives that he does not touch, and many that he barely
touches upon. I would definitely &lt;strong&gt;not&lt;/strong&gt; recommend solely relying on
Messer for your Network+.&lt;/p&gt;
&lt;p&gt;Professor Messer also has two other useful resources for studying that often get
overlooked: his monthly study group livestreams on YouTube and his
weekly/semi-weekly quizzes on his website. His livestreams typically consist
of a PBQ-style question and around five multiple choice questions. They make for
some good practice. His pop quizzes are simple multiple choice questions. They
are not really anything like the questions on the actual exam, but they still do
make for some decent practice.&lt;/p&gt;
&lt;h3&gt;Professor Messer Tests&lt;/h3&gt;
&lt;p&gt;For $30, you can get a set of three 90-question practice tests from Professor
Messer's website. Each test consists of 5 PBQs and 85 multiple choice questions.
These questions do a better job of emulating the exam's actual question style.
Also, like Messer's other study material, his tests stick strictly to the exam
objectives, so you will not have to wonder whether a question on the test will
actually be relevant to the real exam. In my personal opinion, these tests are a lot
better than Dion's practice tests. One critique I have of these tests is that
the PBQs are nothing like the actual exams'. Most of the PBQs are nothing more
than matching the term with its purpose; the PBQs on the actual exam are
more complex than that. The multiple choice questions are still fairly accurate.
Before taking the real exams, I was scoring around 85-95% on the Messer tests
on my first try, so aim for that range. Subsequent attempts can still make for good
practice, but they might not be as accurate a gauge of your
readiness, as it's likely you will have remembered the answers to many of the
questions. If you want to retake the practice tests, it's best to space them
out by a couple of weeks so that you will forget most of the questions.&lt;/p&gt;
&lt;p&gt;It should be noted that Messer never wrote practice tests for the N10-008, and
the 009 practice tests are not out yet.&lt;/p&gt;
&lt;h3&gt;Jason Dion's Practice Tests&lt;/h3&gt;
&lt;p&gt;Jason Dion sells a set of six 90-question practice tests on Udemy for each
CompTIA certification test. I don't know if
Udemy still does this, but when I went to buy the tests, they would be listed
at ~$100. If you came back later or opened the site in a private window, the
tests would randomly go on sale for around $10-$20. So make sure to catch them
when they're on sale.&lt;/p&gt;
&lt;p&gt;Just like Messer's tests, Dion's tests are written to emulate the wording
style of the real exams. Dion does this less gracefully than Messer, as many
of his questions can be entire paragraphs long, only to end up asking a very
straightforward question that one could answer while ignoring the rest of the
information provided. Dion also likes to reuse questions from
older exams, even if they are not applicable to the current exams. If you feel
like a lot of the stuff in Dion's tests wasn't covered in your studies, just
know that that's normal. The scenarios in Dion's PBQs are more accurate to what
you would find on the real exam, but suffer from the problem of usually being
simple multiple choice or select-all-that-apply questions. Before I took the
exams, I was scoring in the 80% range, although I've heard many passed when they
were in the 70s. So I'd say aim for the 70s and above.&lt;/p&gt;
&lt;h3&gt;ExamCram&lt;/h3&gt;
&lt;p&gt;I received a digital copy of both the ExamCram A+ book and the Network+ book for free
from my library. Be sure to check your library to see if they have any study
material available (although beware that a lot of material may be outdated). I
did not read the entirety of either book. I would mainly read through sections
covering things I felt I did not understand all too well from Messer.
That is the benefit of a
book as opposed to other mediums: they typically allow you to go much more
in-depth into a topic. I probably would've done just fine without the books, but
they were free, so ¯\_(ツ)_/¯.&lt;/p&gt;
&lt;h3&gt;Google&lt;/h3&gt;
&lt;p&gt;This one should be straightforward. If there is something in your studies that
you do not understand and none of your material adequately explains it, just
google it. The benefit of googling something is that you will usually find people
with practical IT experience explaining it, and they will often explain
the practical application of the topic. This can be nice because a lot of
CompTIA study material tends to explain only the concept behind something, which can
make it difficult to comprehend its use in the real world. Another benefit of
googling is that you can find out how much of the stuff you're studying is bullcrap. For
example, you probably will not have to know the maximum distance 1000BASE-SX
multimode fiber can support off the top of your head.&lt;/p&gt;
&lt;h3&gt;The Process&lt;/h3&gt;
&lt;p&gt;In this section, I will detail my study process, start to finish.&lt;/p&gt;
&lt;p&gt;The very first thing I would do is go through the entirety of Professor Messer's
video course. Throughout the course, I would take notes on anything that I
thought would be worth noting. I would typically get through a section a day.
If I encountered something that looked like it might be somewhat tedious to learn,
I'd try to get it down that day. For example, when I encountered all of the port
numbers, I rote-memorized them the day I went through that section so that I
would not have to put off learning them to some later date. I did the same thing
with subnetting, WiFi standards, printer troubleshooting, and some others that
I can't remember right now. After I finished a section, I would consult the
exam objectives to see if Messer covered everything. If anything was missed, I
would consult either ExamCram or Google.&lt;/p&gt;
&lt;p&gt;Once I had finished Messer's videos, I would then start digitally transcribing
my notes. Transcribing my notes gave me the opportunity to see if there were
any topics that I did not understand very well.&lt;/p&gt;
&lt;p&gt;Once I had finished my notes, I would then start employing a study strategy that
I call &amp;quot;dumping.&amp;quot; Basically, I would choose a random section in the exam objectives.
Then I would go through each topic in that section and write everything I knew
about it. If I was able to write a paragraph of information on a topic, then
that meant I probably knew it well enough for the exam. If I was struggling
to explain a topic, then that meant I did not understand it well enough, and I
would mark it down as a topic I would need to study more thoroughly in the future. During
this process, I would also add topics to the exam objectives themselves if I thought
they were relevant. For example, I added the bootrec command to section 1.2
of the 220-1102 exam objectives (the one concerning Windows CLI tools), because
the bootrec command is important to know for many of the boot problems CompTIA wants
you to know how to deal with.&lt;/p&gt;
&lt;p&gt;Throughout this process, I'd also make sure to get some hands-on practice.
You don't have to get hands-on practice for every topic on the exam. For example,
here is some of the hands-on practice I got when I was studying for my Network+:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I used Linux on my main desktop, so I got a lot of practice with the
CLI tools and software that CompTIA tests for. Playing around with the tools
gives you a much better idea of what they do and their practical application than
reading about them can.&lt;/li&gt;
&lt;li&gt;I also set up a very cheap homelab. I had an old Dell Optiplex micro that I
converted into a home server sitting next to my router. I installed FreeBSD onto
it and began setting up various network services such as DNS, DHCP, FTP, and
Syslog. I also learned how to use some networking utilities like SSH, nmap,
ifconfig, etc. If you have some old computer lying around your house that you
do not use, such as a laptop, you can easily convert it into a home server by
installing some Linux distro on it.&lt;/li&gt;
&lt;li&gt;I created plans for my own homelab network. I never actually bought the
equipment or anything, but creating the plans for one still taught me a bit
about networking equipment, layout, and design.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;While I was employing this study strategy, I would also take practice exams from
either Messer or Dion to gauge my readiness for the real exam. The moment that
I felt any sort of confidence that I could possibly pass the exam was
when I would schedule it. For two of my three tests, I scheduled the exam 3-4
weeks out. For the other, money problems meant I had to wait until I could afford to
schedule it, so in that case I scheduled it a week out.&lt;/p&gt;
&lt;h2&gt;The Exam Itself&lt;/h2&gt;
&lt;p&gt;Due to the candidate agreement CompTIA has you sign before you take the test, I can't
disclose any questions that appeared on my exams. But I'll try to explain what
the exams were generally like without violating the candidate agreement.&lt;/p&gt;
&lt;p&gt;The exam can have a maximum of 90 questions, but I believe all of my exams had
around 70-75 questions. I had 4-5 PBQs in all of my exams. Many speculate that
the more PBQs you have, the less multiple choice you get.&lt;/p&gt;
&lt;p&gt;I'll give some advice on the PBQs. The advice that everyone gives, and that I agree
with, is that you should skip the PBQs and save them for the end of your test. I think
people tend to exaggerate the difficulty of the PBQs and make it seem like
they're impossible to prepare for. In my experience, the PBQs are basically
just slightly more difficult questions testing your knowledge of the exam
objectives. As long as you know the exam objectives, they should not be
terribly difficult. The most difficult PBQs you can get are the ones that have you
do stuff at the command line. I only had those on my Network+ exam, and they
weren't as scary as they seemed, since you can type a help command that shows
you a list of what commands are available to you. All you really have to do is
remember what the commands do.&lt;/p&gt;
&lt;p&gt;As for the multiple choice questions, again, I believe people tend to
exaggerate their difficulty. In my experience, most of the questions weren't
too obtusely worded. There were definitely some that felt like complete bull,
but I never felt like I was at any risk of failing the exam because of those
questions.&lt;/p&gt;
&lt;h2&gt;Exam-specific stuff&lt;/h2&gt;
&lt;p&gt;Here I'll briefly write about some stuff you should know for each exam.&lt;/p&gt;
&lt;h3&gt;220-1101&lt;/h3&gt;
&lt;p&gt;Know your printers and how to fix them. Know the troubleshooting
methodology. Know your exam objectives.&lt;/p&gt;
&lt;h3&gt;220-1102&lt;/h3&gt;
&lt;p&gt;Know your macOS. Know your Windows administration programs. Know your malware
removal procedure. Know your exam objectives.&lt;/p&gt;
&lt;h3&gt;N10-008&lt;/h3&gt;
&lt;p&gt;Not relevant anymore :-/. Know your troubleshooting methodology.
Know your routing protocols. Know your exam objectives.&lt;/p&gt;
&lt;h2&gt;End&lt;/h2&gt;
&lt;p&gt;So that's all the advice I have to give about taking the tests. I might at some
point write about how effective the certifications are for actually getting a job.
But I'll probably save that for when I get more experience in the IT industry.&lt;/p&gt;
&lt;h2&gt;Links&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.comptia.org/"&gt;CompTIA&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.professormesser.com/"&gt;Professor Messer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.diontraining.com/"&gt;Jason Dion&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.pearsonitcertification.com/"&gt;Pearson IT Certification&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;



        </content>
    </entry>
    <entry>
        <title>1st</title>
        <link href="/pages/pages/posts/1.html"/>
        <updated>2024-10-17T19:00:00Z</updated>
        <summary></summary>
        <author><name>Samuel Young</name></author>
        <content type="html">

&lt;h1&gt;1st&lt;/h1&gt;
&lt;p&gt;This is my first post on this website. I guess the 'Posts' part of my site could
be considered a blog of sorts. This section is for whenever I feel like
writing about something I'm interested in that I feel someone else might like
to read. It might be related to tech, it might not. Who knows. ¯\_(ツ)_/¯&lt;/p&gt;



        </content>
    </entry>
</feed>
