An online markdown blog and knowledge repository.
A space for collecting thoughts and technical walk-thrus and takeaways during my coding journey through CY 2024.
Lots going on right now!
While defining a link fragment to enable a user to click an Anchor element and jump farther down the page to related content, I found that the Edit tool sometimes does not allow the anchor link to function. Also, when in publish preview mode, the link might not work either. Attempting to fix the problem by adding a Code block and inserting actual HTML code (`#location-to-jump-to`, `_self`, etc.) would not work at all. Soon after adding the HTML code, the Editor page would hang. Frustrating. I guess I'll need to read up on this (seemingly obvious but somehow non-functional) topic.
- [x] Review how to add link fragments to a page in SQSP.
It turns out the challenge is related to how SQSP routes Hidden, Unpublished pages vs. Published ones. When a link points to a page that isn't published, the SQSP routing processor doesn't allow viewing the Hidden, Unpublished page, even for Administrators. So, testing bookmark links (link fragments) cannot be completed until the target page is published (even if it is the same page).
A recent challenge I made for myself was to create a printable-output webpage using basic web-page design concepts as well as React. The goals were to exercise skills using HTML, CSS, JavaScript, Bun, Vite, React-JS, and React-TS. Here are some takeaways:
- Use a `print.css` file to define a specific `@media print` statement, with any additional rules to define the print media size, such as 4 inch by 6 inch, etc.
- Watch for `border` and `padding` statements that set a value greater than zero. In other words, set those CSS properties to `0px` on the root elements to ensure they don't end up causing more pages than expected to be printed.

Completed the "GitHub Copilot Fundamentals - Understand the AI pair programmer" Learning Path on MSFT Learn!
GitHub Copilot provides numerous ways to interact with it within VS Code. Depending on the approach, a slightly different context and specificity will be generated. Also, creating good prompts and providing enough context to begin with helps.
- Copilot Chat can reference specific context (e.g. `#file`), and generate test cases.

As for learning TypeScript:
- A React functional component returns `JSX.Element`.
- Typed functional components follow the pattern `React.FC<{ propName: React.ReactNode }> = ({ propName }) => { ... }`.

```tsx
/* Exporting a React TS Functional Component */
export const TextFieldPart: React.FC<{
  cardKey: string;
  cardValue: string;
}> = ({ cardKey, cardValue }) => {
  ...
}
```
Copilot provided plenty of inline suggestions along the way. The most helpful generated responses related to:
On occasion, GitHub CodeQL alerts are raised indicating moderate to severe issues with dependencies in my Portfolio project website. It turns out `eslint` had some dependencies with recently discovered vulnerabilities. Thankfully, NPM's website has links to dependents and dependencies for published modules, and it didn't take long to find that `eslint` was the common parent, so a quick update fixed the issue.
This is a relatively new, local group of Mesh Networking enthusiasts in the greater Seattle area, covering topics of Meshtastic, AREDN, and HamWAN, primarily. I've been involved in Meshtastic and AREDN, so seeing this grassroots group grow is exciting.
Every year for the last few years in December, Advent Of Code releases a holiday-themed group of code challenges for developers to hack at, pretty much any way they want to. While somewhat gamified, it isn't as gamey as LeetCode and other code challenge sites, but provides some leaderboards and other statistics. I've challenged myself to try and work through as many of the Advent challenges as possible, using C#, through Christmas Day.
As of right now, the goals for December include:
In 2025 I hope to get started on:
...and make progress with:
Last week I completed a bunch of work on my portfolio website, proving to myself that I can work with someone else's code, make effective changes, update and replace existing imported components, and update data structures (albeit simple JSON ones here) and acquisition to augment the website as a whole. As mentioned before, there is much more to do in the coming weeks, but for now I have multiple other projects that need some attention to move them forward.
A few weeks ago I was recruited to help define an organization's web presence and basically replace their old website with a new one, on a new web hosting platform. Overall the detail on scope of work is a little spongy but I don't anticipate that to be a problem at this point. There are the beginnings of a plan, and my part is to help with sorting out technical issues during the planning process, and performing website content publishing, maintenance, and feature management for 2025. I'm looking forward to this experience and what it will bring in 2025!
Even before becoming a web developer, I learned from others that CMS platforms can be pretty difficult to work with. The complex array of limitations, cost structures, and sometimes unexpected results can make for a frustrating experience. I'm also aware that many, many developers are either gainfully employed as CMS Webmasters, or otherwise use a CMS as their creative-space outlet, often associated with income from product sales or membership fees to content consumers.
This new volunteer webmastering role I've taken on has put me in front of a partially deployed SquareSpace (SQSP) website that needs to be updated for events coming in 2025. I've spent only a few hours fiddling with pages, but here are some key takeaways.
Working with Links:
SQSP makes creating links fairly simple (not that it's difficult in HTML). However, there is no facility to bookmark areas within a page. In the past, the website I'm working on has had a long, single-page resource for a major event they do. It is difficult for me (and probably others) to track all of the information on this very long page. So, as an idea to improve on this long-form layout, I tried to implement navigation "bookmarks" within subsections of the page to help visitors navigate all the information. SQSP does not support doing this directly. A work-around I tried was to add a Code block to the page and configure the anchor link to a specific ID, then add another Code block with the same ID configured. While this does work, there are caveats (listed below), and I am working on a different way to approach solving this user experience problem.
Editing SQSP Pages:
Adding content to pages is somewhat frustrating. For example, an added Text Block appears near the Toolbar, often overlapping existing content. Then the Text Block must be clicked in just the right spot in order to start adding text. Once the Text Block is on the page with the desired content, inevitably it will need to be moved and resized somehow. While moving the Text Block, the SQSP editor tries to do some resizing and centering calculations, sometimes causing the Text Block width and/or height to expand for reasons I don't understand. Another side effect of moving Text Blocks around and resizing editing areas is that the Text Block configuration sometimes changes unexpectedly, e.g. an H2 style is changed to H3. I'm used to having more control over these website elements, so this will take some getting used to.
Adding Images to Sections:
This is made fairly easy. The workflow is straightforward, and manipulating the image size and location isn't too difficult. Even changing an image shape is pretty simple. The complexity comes in when trying to make the image accessible. There is mention of adding captions to images, but nowhere in the Edit UI have I found that capability yet. Which is weird, because there's even a "Lightbox" style that can be enabled (it creates a modal with the image enclosed, along with a styled caption if there is one). For now, the best that can be done is to add `alt` text to the image, and a Text Block near the image to describe it.
I started working on yet another side project with the goal of re-learning core website design, development, and style concepts.
Here are some key takeaways from this experience (so far):
- `display: flex` can start to get complex when it comes to some display issues. For example, it pays to be stingy with flex, and to always consider what really needs to happen with the layout before writing a flex container. Once I got the hang of the flex direction and cross-axis configurations, it became easier to implement flex in those times when it is really necessary.
- Give images a `height` or `width` (or better yet both) so that the browser can calculate proper scaling of the image for the container it might be rendered within.
- I recently heard about a newer way to tell the browser to select a right-sized image for the display port size in use: `srcset` and `sizes` are used together to clue in the browser as to which image will be best to use for the display size. Of course, multiple images will need to be available so the right one can be pulled in and rendered.
- `@media` queries can set the correct `width:` value depending on the viewport size. I think this slows loading the page due to the scaling that happens during render, after the CSS is loaded and applied.
- Use `<picture>` as a parent element to multiple `<source>` elements and a (default) `<img>` element, so the browser can select the correct image during image preloading instead of right before rendering. Cool stuff!

I took a look at what a Monorepo is, finally. Here are some key takeaways:
However, there are some drawbacks:
Tools:
- Shallow clone: `--depth <depth>` limits clone history to the specified number of commits. See also `--single-branch`.
- `filter-branch`: essentially re-writes history in a non-performant way, with multiple hazardous side-effects.

It turns out I've been using (a very small version of) this concept. Whenever I work within a multi-project Solution in DotNET, it is effectively instantiating a monorepo for a "solution of projects". While implementing vertical changes that impact another project in the Solution, I update the impacted project code and that becomes part of the PR (and therefore the updated/new version) without any need to open a new PR in a separate, dependent project.
An interesting blog article about monorepos can be found on Semaphore CI's blog.
The book arrived! Several years ago I was introduced to The Coding Train YouTube Channel, which is the media output part of The Coding Train. Daniel Shiffman promotes learning and fun through JavaScript and P5js (primarily). He's written "The Nature of Code" as a means to help develop coders' ability to mix their imagination with learning to code and implementing solutions. So, this week I worked through Chapter 0, which provided a basis upon which the rest of the book will focus: random numbers and probabilities, coding physical behaviors, and working with trees, networks, and other data structures and algorithms.
After reading through and completing the exercises in Chapter 0, there is an end-of-chapter challenge: Create a project that uses concepts learned in the chapter. Following chapters will add new concepts, which can be added to this growing "what I learned" project. I generated a scene using an open-source stock image of a Falcon, flying randomly (but in a semi-natural looking way) over a picture I took of a campsite I stayed at several years ago. The project will eventually appear in my GitHub.
While I don't have a ton of time to be doing this, I've decided my goal will be to complete one chapter per week through the rest of this year. My overall goal is to complete all the exercises in the book by the end of January 2025.
While working through a very small exercise side-project (build a QSL Card form for online post-card generation), I discovered that my interchangeable use of EM and REM was causing some surprising results in font sizing, padding, and other EM- and REM-unit supporting CSS properties... so I looked up the difference:

- `rem` units are relative to the root `<html>` element.
- `em` units are relative to the parent element, so they compound in nested element trees.

The compounding effects of EM usage in element trees caused problems for me. While debugging, I used the Developer Tools to determine how the font size (or padding, etc.) was computed, and it turns out that editing a parent element's EM property also impacted the child element.

Once I identified this trickle-down effect, I discovered that using REM alleviated the problem in these nested scenarios, but in some cases where resizing the screen was involved (such as moving from desktop-sized to phone-sized), it occasionally made sense to let the compounding effect of EM work some magic for me.

The key takeaway is to use `rem` to ensure the unit is based on the root `<html>` element unit, and to only use `em` when it is desirable to have the element sizing change based on the sizing change of a parent element.
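To see why the compounding surprised me, here is a small arithmetic sketch (assuming the common 16px default root font size) of how nested `em` factors multiply while `rem` stays anchored to the root:

```typescript
// Illustration only: how nested `em` compounds vs. how `rem` stays fixed.
// The 16px root size is an assumption (the browser default).
const ROOT_PX = 16;

// Each nested element's `em` multiplies its parent's computed size.
function emChainPx(factors: number[]): number {
  return factors.reduce((px, f) => px * f, ROOT_PX);
}

// `rem` is always relative to the root <html> size, regardless of nesting.
function remPx(factor: number): number {
  return factor * ROOT_PX;
}

console.log(emChainPx([1.5, 1.5])); // 36 -- 1.5em inside a 1.5em parent compounds
console.log(remPx(1.5));            // 24 -- 1.5rem ignores the parent entirely
```

Editing the parent's factor changes every `em` descendant's computed value, which is exactly the trickle-down effect I was debugging.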
I completed reading-up on GA4. My notes can be found in my notes about google analytics.
Multiple events the last few weeks have caused some disruption in my development cycles, note taking, learning cycles, etc. Also, jury duty calls, which might suddenly interrupt and cause uneven productivity here.
Next version is nearly ready to publish, after working through and implementing the logic to support Closed ATX and Alternate Style headings, performing some refactorings, updating unit tests, and validating readiness through manual tests!
Some key takeaways and things I said to myself (and out loud) while working through this project:
I completed publishing a Release version of Create-ToC set at version 0.4.2. Pretty much right after publishing I discovered a few bugs. I need to update my development processes to be certain the following steps are completed:
Doing these things will help keep my workflow organized, even when I have to step away from the project for some time between bugfixes and version releases.
After a bit more debugging, some code refinements, added documentation, unittest fixes, and adding manual-testing files, version 0.4.3 is now available as a Pre-Release version. In a few days, after some regular usage, I'll Publish a 0.4.4 Release version.
While publishing the pre-release at 0.4.3, I wanted to document some of the operation of the code for personal purposes.
- The command `markdown-toc.createTOC` is registered, loading an anonymous async function as the second parameter, and setting both as `disposable` objects.
- `findTopHeading()` is executed, and the results are stored in an object to identify the Heading style (Open ATX, Closed ATX, or Next Line) as well as the line number where the top (Level 1) heading is found.
- The `match()` function is called to check for an existing Table of Contents. If there is at least one match, a Warning Message is displayed and execution returns.
- `getLevelHeading()` is called within a `for` code block, and positive results are stored in a local array. Inner functions `getTitleOnly()`, which replaces illegal Heading title characters, and either `getHash2LH()` or `getDash2LH()` are executed (depending on the style) to acquire the correct Level 2 Headings, ignoring any other text or heading levels.
- When the `for` loop exits, if there are no items in the array, a Warning Message is displayed and execution returns null.
- `createTOC()` is called, which in turn calls `getTitleOnly()` (replacing illegal Heading title characters), and then `getLoweredKebabCase()`, which (as its name states) forces lower-cased characters and replaces any whitespace characters with a dash `-` (except for newline and carriage return, which are ignored). Lastly, the function `getLinkFragment()` is called, which properly formats the Title and Lowered Kebab Case outputs into an appropriate (lint-able) Link Fragment.
- The output is fed into the VS Code API `edit.insert()` function, which adds the formatted string data to the active document.
- The `workspace.applyEdit()` function is called to 'write' the formatted string data (the new Table of Contents) to the working document so the user can see it and save it.
- `push()` is called on the API `ExtensionContext` to add the `disposable` variable to the `ExtensionContext.subscriptions` array.

The 0.4.4 release is now Published to the VS Code Extension Marketplace!
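Since the walkthrough above names several string helpers, here is a minimal, hypothetical TypeScript sketch of what `getTitleOnly()`, `getLoweredKebabCase()`, and `getLinkFragment()` might look like; the extension's real implementations (illegal-character handling in particular) almost certainly differ:

```typescript
// Hypothetical reimplementation sketch -- names mirror the walkthrough,
// bodies are my assumptions, not the extension's actual code.

// Strip leading '#' markers (and trailing Closed ATX '#'s), then trim.
function getTitleOnly(heading: string): string {
  return heading.replace(/^#+\s*/, "").replace(/\s*#+\s*$/, "").trim();
}

// Lower-case and replace runs of spaces/tabs with a single dash.
function getLoweredKebabCase(title: string): string {
  return title.toLowerCase().replace(/[ \t]+/g, "-");
}

// Combine title and kebab-cased anchor into a markdown link fragment.
function getLinkFragment(title: string): string {
  return `[${title}](#${getLoweredKebabCase(title)})`;
}

console.log(getLinkFragment(getTitleOnly("## My Heading Title ##")));
// [My Heading Title](#my-heading-title)
```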
- `^`: Start of string. In multiline mode, this matches immediately following a `\n` (newline) character.
- `$`: End of string. In multiline mode, this matches immediately prior to a `\n` (newline) character.
- With `//m` regex matching, it might be important to include `^` and `$` anchors, but it is crucial to include the context of where `\n` characters are in the intended match!
- The `string.match(/regex/opt)` built-in returns either `null` (no match) or an array of one or more match items. It is not a boolean return!
- Line endings can be `\n` or `\r\n`. JavaScript `string.match(/regex/opt)` can search for those characters, and it is up to the implementor to decide how to leverage the RegExp. Examples below show two ways to execute the same query.
can search for those characters and it is up to the implementor to decide how to leverage the RegExp. Examples below show two ways to execute the same queryconst regexpPattern1 = ...; // some pattern
const regexpPattern2 = ...; // some other pattern
// might be easier to read
const pairedOrLogic = inputText.match(/^regexpPattern1$/gm) !== null
|| inputText.match(/^regexpPattern2$/gm) !== null;
// might be more succinct
const groupedMatches = inputText.match(/^(?:regexpPattern1|regexpPattern2)$/gm) !== null;
// 1) using `^` and `$` along with `/gm` could cause the regexp to consume more resources than desired
// 2) optionally use a quick-exit technique with `/m` so string.match(regexp) returns after first match
return inputText.match(/^(?:regexPattern1|regexPattern2)$/m) !== null;
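To make the null-or-array behavior concrete, here is a small runnable demonstration against a made-up sample string:

```typescript
// match() returns null or an array -- never a boolean.
const sample = "# Title\nSome prose\n## Section";

const firstL2 = sample.match(/^## .+$/m);          // first match only, or null
const allHeadings = sample.match(/^#{1,2} .+$/gm); // every matching line, or null

console.log(firstL2 && firstL2[0]);     // ## Section
console.log(allHeadings);               // [ '# Title', '## Section' ]
console.log(sample.match(/^### .+$/m)); // null -- no Level 3 headings present
```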
But working with RegEx is very tricky and there are many ways to approach pattern matching. Here are some questions to ask while figuring out RegEx patterns:

- Should the input `string.trim()` away leading and trailing whitespaces?

I took time to update My Portfolio Web Site:
The original project used bootstrap tooling CRA (Create React App) which is known to have some limitations and is also fairly stagnant, so I decided to challenge myself and move to Vite tooling instead.
- `package.json` changes the scripts section (of course, to call Vite to drive dev, debug, and build operations).
- Vite requires that the `type` property be set properly. In this case the type should be `'module'`.
- There was leftover CRA configuration in `package.json`, so that was removed.
- The entry point is `index.html`, which is in the `src` folder by convention. For Vite, the instructions requested it be moved to the root of the project. My belief is that Vite's build function looks for this file in this location in order to create the deployable assets to launch a live web site.
- Vite expects React component files to use the `jsx` filename extension.
- Vite supports `SASS` (actually "Dart Sass" according to the developers), so the SCSS files were compatible and needed only minor edits to ensure they used up-to-date syntax, such as `@use` instead of `@import`.
.Getting the Web Site to deploy to Netlify using Vite was only slightly challenging:
- `Publish Directory` had to be changed to the default `vite build` output directory :arrow_right: `{root_dir}/dist`.
Note: Ubuntu released 24.x (Desktop, Server, etc.) as an LTS version, but Netlify's JamStack is pinned to `Ubuntu Focal 20.04`, which appears to be an LTS release with general support until May 2025 and ESM (Expanded Security Maintenance) until 2030. So there is a good chance Netlify will (hopefully) make `Ubuntu Noble 24.x` LTS the default sometime in the coming months.
Many more updates are in the works, providing opportunity for me to learn and grow my website building skill sets.
I completed the tasks for a Pull Request with major updates and deployed the site without issue. There is still plenty to do (especially in terms of accessibility) and my plan is to take on these issues over time and incrementally improve the site UX for site visitors, and to keep myself plugged-in and working with React and webapp building and maintenance.
So many events, so little time!
As I have had time, I've attended some online informational sessions about AI and DotNET, worked on updates to my Create-ToC VS Code Extension, and made some connections with other developers and a couple of organizations that might utilize my technical skills. There is a lot going on right now between learning, volunteering, networking, coding, and life in general.
That is the question. While working through updating my Markdown ToC extension, it became clear that my design suffers from difficulty in testing and extending. The latest push has been to enable Create-ToC to recognize both Open ATX and Closed ATX heading styles, and follow suit when generating the Table of Contents.
Some refactoring of implementations into modules, and removing extraneous module functions results in a more testable, and simplified implementation. Unit tests were also refactored to test the updated JS Modules and their functions.
But there's more work to do! Unit tests are still failing, so those issues need to be worked out, and once that's done the README documentation must be updated, and the version incremented for publication. This time, I want to publish the next version as a full release, rather than a pre-release (as I did earlier this year). I'm looking forward to having an updated functional utility in a public marketplace!
For much of week 29, I was out of pocket not feeling well so not many updates were made during this time.
During weeks 30 through 34, I had several events and many meetings to attend to. There is not as much to report, but some summaries are included below.
During the massive merging party, preparing for the latest 2.x release of the form, a few functions were not tested well enough to reveal that they were incorrectly implemented. I'm pretty sure this was a result of interrupted development that was not followed up and validated properly. This required pushing some quick-fix commits.
Key takeaways here:
Recently I started looking into getting the BF-BMX API Server to return information on what it has stored in its DB. At first this was exploratory, but during the last week or so I have turned a corner in my thinking and decided to develop a preview of a Reporting Server that will simply render information about the stored data.
The following items are a rough overview of the remaining work I'd like to get done before this year's event:
This is really exciting to me and I look forward to having this tool to keep on top of participant data at this and future events!
I've just completed several days of focused work on the BF-BMX Reporting Service, and it is looking and working fairly well. There are still some bugs and nits that need to get addressed, but for the most part the solution is ready for this year's event, and I'm starting to trust the data displays it is rendering.
Some key takeaways, highlights, and lowlights of the last few days:
- Use `@onclick` to fire a callback to handle clicking on one of the named items in the list. This can be helpful to a user that maybe doesn't have a keyboard, doesn't type quickly, or otherwise might make mistakes. By clicking on a list item instead, the user performs a single click, and the code is guaranteed to get a valid input. It's a win-win!
- I set up a custom stylesheet alongside `app.css` in the `wwwroot` folder. This way, I could simply comment out CSS classes in the custom file and in the root `app.css` file and discover what was still in use and what I could safely move to my new, custom classes. I'll have to file this away and use the approach in the future to help with future CSS migrations.

A recent conversation with a ham friend resulted in a renewed interest in computer networks and automation, so I've been looking into Kismet Wireless and working with Linux cron and registering custom services. I'm already planning to use RPis at a few upcoming ham events, so I'll integrate some of what I learn into building and configuring Pis.
A couple weeks ago I replaced my VHF omni vertical with a VHF beam antenna. The omni antenna is better in windy or icy conditions so I tend to have it up during the darker months, but there is a local RF problem (reflections or some other RF emitter) and the omni receives those all too well. The yagi is able to avoid those noise issues with a more focused view, and the rotator allows changing direction remotely. However, I haven't tuned the yagi since I last put it back together, so I'll need to run some diagnostics to find out if the tuning is out of band, and make requisite changes.
Last week I did maintenance on my HF antenna and followed up with some experimentation to try and improve its performance on many bands that I want to use. It turns out my previous installation using a 9:1 unun with an 80-ish foot hot wire and a 25 foot ground wire was not a great solution. I've been using it for years, but had to be careful about what bands and modes I used due to poor tuning in multiple areas. After experimenting and reading more about off-center-fed dipoles and end-fed long wire antennas, I decided my OCF implementation was faulty. So the antenna was refactored to follow advice from Palomar Engineers by shrinking the main radiating element and removing the ground element completely. Now the antenna covers more bands than before, and performs better on the sub-bands I wanted.
I took some time out to review some missed Build 2024 sessions, and updated documentation accordingly.
Microsoft Reactor is hosting an online ".NET Aspire Developers Day" where multiple speakers will discuss and demonstrate .NET Aspire use cases and implementation details. See DotNET Aspire for notes.
The event was fun and exciting, but multiple twists made for a very different experience this year. The BF-BMX tool came in handy at my location, especially the BFBMX.Reports tool that I had started developing in July:
In the near future, a meeting will be scheduled to discuss BF-BMX performance, usability, and planning for v2. This will probably wait until October, given how busy September is shaping up to be.
In July I put some effort into implementing a bugfix and new feature, and completed a preview version publication. Unfortunately, it is not ready for full release yet. I have a work item in my backlog that will be promoted forward to fix and increment the pre-release version, and implement another new feature for a next minor version Preview and release.
Internally I have a goal to get the extension into a capable, reliable state before the end of 2024. I use the tool almost every day that I code, so having a stable, helpful tool that I built myself is really rewarding!
I managed to complete a couple Leetcode challenges:
It has been long enough since I last worked on a Linked-List DS&A challenge that I really had a hard time completing the first challenge. After throwing around some ideas, and attempting to implement them, I had to stop what I was doing, reconsider what I think I know about Singly Linked Lists, and start over before getting the solution.
Removing duplicates from a sorted Integer array wasn't too difficult. Early on I recalled how to utilize a HashSet to maintain a unique collection, and managed to get a solution working within a single iteration over the input array. The final solution (and the best performing) was one that borrowed ideas from sorting algorithms where only the indices were tracked, and when certain conditions are met either one or both indices are incremented, or the value from the right index would be used to overwrite the value at the left index. This made a big difference in performance and code simplicity and readability.
Merge Two Sorted Singly-Linked Lists:

```text
Function: MergeTwoLists
Input: ListNode LeftList, ListNode RightList
Output: ListNode
  Instantiate: ListNode OutputNode <- new
  If: LeftList EQ Null
    Reassign: OutputNode <- RightList
    Return: OutputNode
  Else If: RightList EQ Null
    Reassign: OutputNode <- LeftList
    Return: OutputNode
  If: LeftList Value LE RightList Value
    Reassign: OutputNode <- new ListNode <- LeftList Value
    Reassign: LeftList <- LeftList Next
  Else:
    Reassign: OutputNode <- new ListNode <- RightList Value
    Reassign: RightList <- RightList Next
  Initialize: ListNode OutputTail <- OutputNode
  While: TRUE
    If: LeftList NOT Null AND RightList NOT Null
      Switch on Comparison: LeftList Value, RightList Value
        Case: -1
          Reassign: OutputTail Next <- new ListNode <- LeftList Value
          Reassign: OutputTail <- OutputTail Next
          Reassign: LeftList <- LeftList Next
        Case: 1
          Reassign: OutputTail Next <- new ListNode <- RightList Value
          Reassign: OutputTail <- OutputTail Next
          Reassign: RightList <- RightList Next
        Case: 0
          Reassign: OutputTail Next <- new ListNode <- LeftList Value
          Reassign: OutputTail <- OutputTail Next
          Reassign: LeftList <- LeftList Next
          Reassign: OutputTail Next <- new ListNode <- RightList Value
          Reassign: OutputTail <- OutputTail Next
          Reassign: RightList <- RightList Next
    Else If: LeftList NOT Null
      Reassign: OutputTail Next <- new ListNode <- LeftList Value
      Reassign: OutputTail <- OutputTail Next
      Reassign: LeftList <- LeftList Next
    Else If: RightList NOT Null
      Reassign: OutputTail Next <- new ListNode <- RightList Value
      Reassign: OutputTail <- OutputTail Next
      Reassign: RightList <- RightList Next
    If: RightList Null AND LeftList Null
      Execute: Break
  Return: OutputNode
```
Note: The Switch-Case block skips the `default` case that C# conventions call for. Other languages might not expect this, so for simplicity of writing pseudocode I left it out.
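For comparison, here is a runnable TypeScript version of the same merge. Note it uses a dummy-head node and splices the existing nodes instead of allocating new ones, so it is a variation on the pseudocode above rather than a line-for-line translation:

```typescript
class ListNode {
  constructor(public val: number, public next: ListNode | null = null) {}
}

function mergeTwoLists(left: ListNode | null, right: ListNode | null): ListNode | null {
  const dummy = new ListNode(0); // dummy head avoids a special case for the first node
  let tail = dummy;
  while (left !== null && right !== null) {
    if (left.val <= right.val) {
      tail.next = left;
      left = left.next;
    } else {
      tail.next = right;
      right = right.next;
    }
    tail = tail.next;
  }
  tail.next = left ?? right; // append whatever remains of either list
  return dummy.next;
}

// helpers for demonstration only
const fromArray = (xs: number[]): ListNode | null =>
  xs.reduceRight<ListNode | null>((acc, x) => new ListNode(x, acc), null);
const toArray = (n: ListNode | null): number[] => {
  const out: number[] = [];
  for (; n; n = n.next) out.push(n.val);
  return out;
};

console.log(toArray(mergeTwoLists(fromArray([1, 2, 4]), fromArray([1, 3, 4]))));
// [ 1, 1, 2, 3, 4, 4 ]
```

The dummy-head trick removes the need to pre-build the first output node before entering the loop.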
Remove Dupes From Sorted Array:

```text
Function: RemoveDuplicates
Input: NumsArray
Output: NumsCount
  If: NumsArray Length LT 2
    Return: NumsArray Length
  Initialize: LeftIdx <- 0
  Initialize: RightIdx <- 0
  While: RightIdx LT NumsArray Length
    If: LeftIdx EQ RightIdx
      Reassign: RightIdx <- Increment 1
      Continue: (next iteration)
    If: NumsArray at Index LeftIdx EQ NumsArray at Index RightIdx
      Reassign: RightIdx <- Increment 1
    Else:
      Reassign: LeftIdx <- Increment 1
      Reassign: NumsArray at Index LeftIdx <- NumsArray at Index RightIdx
      Reassign: RightIdx <- Increment 1
  Return: LeftIdx + 1
```
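The same two-pointer idea, sketched as runnable TypeScript:

```typescript
// Two-pointer, in-place dedupe of a sorted array; returns the unique count.
function removeDuplicates(nums: number[]): number {
  if (nums.length < 2) return nums.length;
  let left = 0;
  for (let right = 1; right < nums.length; right++) {
    if (nums[right] !== nums[left]) {
      left++;
      nums[left] = nums[right]; // overwrite in place with the next unique value
    }
  }
  return left + 1; // left is an index, so the count is one more
}

const nums = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4];
console.log(removeDuplicates(nums)); // 5
console.log(nums.slice(0, 5));       // [ 0, 1, 2, 3, 4 ]
```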
Just for the record, I only write these solutions out as practice:
It is up to readers of this rambling blog to do the right things.
The interface is pretty simple:
The challenge is with embedding version information into the application. For example, while developing new features, so long as they are compatible with previous releases, the Minor version should be incremented. Also, for bugfixes for the Minor version should increment the third number using the semantic versioning system. If the versioning is embedded into the code, then as dev branches are merged-in to the staging branch prior to release, the versioning information will get overwritten. If a particular Minor or Bugfix version increment does not make it to Staging, then the numbering system leading up to 'latest' will appear to skip numbers, and the correlated commits to Staging won't explain why the versioning is not orderly.
If I relax my view of how semantic versioning works, this really isn't a problem. But I have to ask how to work through (or around) this so the numbering system will work during pre-release testing and demos, acceptance testing once changes are staged, and final versioning before official release. I'm certain there are tools and techniques to get this to work more easily.
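As a reminder of the increment rules discussed here, a tiny (hypothetical) helper that bumps a `major.minor.patch` string; it assumes well-formed input and ignores pre-release and build metadata entirely:

```typescript
// Hypothetical semver-bump sketch, not a real versioning tool.
type Level = "major" | "minor" | "patch";

function bump(version: string, level: Level): string {
  const [major, minor, patch] = version.split(".").map(Number);
  if (level === "major") return `${major + 1}.0.0`;      // breaking changes
  if (level === "minor") return `${major}.${minor + 1}.0`; // compatible features
  return `${major}.${minor}.${patch + 1}`;               // bugfixes
}

console.log(bump("2.1.3", "patch")); // 2.1.4
console.log(bump("2.1.3", "minor")); // 2.2.0
```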
After a few days of juggling more ideas on how to handle users' input of time in 24-hour format, I settled on a set of functions that carefully identify and process the hours and minutes based on whether or not a colon is present.
It is very difficult to anticipate and cover every possible input from a user, so I made some assumptions about common inputs and mistakes (based on my own experience) and will convey the expected behaviors to the end users.
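The actual form's functions aren't shown here, but a minimal sketch of colon-aware 24-hour parsing (the function name and edge-case choices are my assumptions) might look like:

```typescript
// Hypothetical sketch: parse "HH:MM", "HHMM", or "HMM" 24-hour input.
function parse24HourTime(input: string): { hours: number; minutes: number } | null {
  const text = input.trim();
  const match = text.includes(":")
    ? text.match(/^(\d{1,2}):(\d{2})$/) // colon form: 13:30, 9:05
    : text.match(/^(\d{1,2})(\d{2})$/); // bare form: 1330, 930
  if (!match) return null;
  const hours = Number(match[1]);
  const minutes = Number(match[2]);
  if (hours > 23 || minutes > 59) return null; // reject out-of-range values
  return { hours, minutes };
}

console.log(parse24HourTime("13:30")); // { hours: 13, minutes: 30 }
console.log(parse24HourTime("930"));   // { hours: 9, minutes: 30 }
console.log(parse24HourTime("2460"));  // null
```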
After a couple more surprise bug fixes, I've decided to release v2.1.4. A demonstration will take place during a Monday night Zoom session with the team lead and other Bigfoot volunteers.
NOAA and the NWS updated the weather API, which broke my latest Mob-WX updates. I have a work item on my backlog to fix the issues.
Thinking further out, it would be a good idea to develop and deploy an API Gateway so that the mobile app doesn't have to break and get revisioned and instead, silent updates can happen at the API Gateway that will support several minor version releases of the mobile app itself.
Plenty more work will be necessary to make that happen and I anticipate it will be fun and interesting.
It has been 1 year since I released my first VSCode Extension and it is in need of several updates and promised feature delivery:
Last weekend I started working on addressing the above issues, as well as preparing to update GitHub Actions to enable build and publish capabilities.
Overall: Success! There is more work to do to ensure that pre-release publish only happens at a particular action. For now I've set it to a particular branch. A better change (later) would be to only publish on a particular tag, which I'll figure out some other time.
Worked on my Mobile Weather App, fixing bugs. There are some architectural issues (I'm now realizing) that will need to be addressed over time. For right now though, it should be fine. Some takeaways:
I also read about Extension Methods in C# (F# and Visual Basic too) and made some notes. Some key takeaways:

- An Extension Method is a static method in a static class whose first parameter is marked with `this` and a type parameter that matches the origin Class that the extension will use.
- Import the containing namespace with a `using` directive and then call the Extension Method as if it were the target Type's instance method.

Visited Rob and Phil to work through setup, deployment, and usage of BF-BMX Desktop and Server components across multiple computers, WiFi networked, with Winlink Express for sending/receiving messages with bib data.
Some key takeaways:
To combat the issues in the last bullet point, I made some changes:
I decided that a Failure activity log entry was not necessary for failed attempts to send data to the Server because a legitimate scenario is to run the system without a server in the mix at all. In the future I may revisit this and avoid logging these errors when it is known a server-side will not be included.
A new RC will be posted to the BF-BMX project site which, to the best of my estimating, will probably be the final version before this year's event.
I've been busy on several fronts. In software development I continued updating the Bigfoot Bib Report Form, and also started updating my Mobile Weather App. When I realized I'd not done any code challenges for a few days I did some deep diving into Trees. Rod Stephens' book [Essential Algorithms] has been useful and this time I drove right-on through the entire Trees chapter, implementing pseudo-code into real code, and completing challenge questions along the way.
Some key takeaways from the last 5 days:

- `H * H - 1` steps.

While working through the Trees chapter I was using JavaScript; however, the code was essentially the same as this C#:
```csharp
public class MyTreeNode
{
    public int Data { get; set; }
    public MyTreeNode? Left { get; set; }
    public MyTreeNode? Right { get; set; }
    private List<int> Visited { get; set; } = new();

    public List<int> GetNodesInorder()
    {
        this.Visited = new List<int>();
        TraverseInorder(this);
        return Visited; // let the caller deal with the result values
    }

    public void TraverseInorder(MyTreeNode currentNode)
    {
        if (currentNode.Left != null)
        {
            TraverseInorder(currentNode.Left);
        }
        // process currentNode; here it is added to a Visited list
        this.Visited.Add(currentNode.Data);
        if (currentNode.Right != null)
        {
            TraverseInorder(currentNode.Right);
        }
    }
}
```
It's also possible to use a `while()` looping structure to do this, and there are pros and cons to each:
Of course I'm off-track, having been distracted by an interview question: "Which sorting algorithm uses no additional storage?"
First of all, I need to continue training my mind to ask sorting-algorithm questions (to myself or otherwise) so that I can hone in on a reasonable solution:
Looking at Big-O Algorithm Complexity Cheat Sheet here are some possible answers to the original challenge question:
In line with this thinking, I completed developing Heapsort in JavaScript following guidance in Rod Stephens' book [Essential Algorithms], and updated my repo with the code. The README walks through the Heapsort algorithm.
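A condensed heapsort sketch along the same lines as the book's pseudo-code (my own minimal version, not the repo's code): it sorts in place, so it uses O(1) additional storage.

```javascript
// In-place heapsort: O(N log N) time, O(1) extra storage.
function heapsort(arr) {
  const n = arr.length;
  // Build a max-heap by sifting down from the last parent node.
  for (let i = Math.floor(n / 2) - 1; i >= 0; i--) siftDown(arr, i, n);
  // Repeatedly swap the max (root) to the end and re-heapify the prefix.
  for (let end = n - 1; end > 0; end--) {
    [arr[0], arr[end]] = [arr[end], arr[0]];
    siftDown(arr, 0, end);
  }
  return arr;
}

function siftDown(arr, root, size) {
  while (true) {
    const left = 2 * root + 1;
    const right = left + 1;
    let largest = root;
    if (left < size && arr[left] > arr[largest]) largest = left;
    if (right < size && arr[right] > arr[largest]) largest = right;
    if (largest === root) return;
    [arr[root], arr[largest]] = [arr[largest], arr[root]];
    root = largest;
  }
}
```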
I've been working on updating a Form used in the HAM community to track event participants, such as marathon runners. The form is designed as a single-page web form with HTML, CSS, and JavaScript for layout, style, and functionality. In the original form, there is some focus on maintaining compatibility back to the Windows 7 era (about 2011).
Using MDN and CanIUse.com to determine what JavaScript built-ins would be safe to use was very difficult and tedious. I used a spreadsheet to help track what I'd already looked up and to record compatibility levels of built-in methods, statements, and expressions.
I discovered a better way: Why not just stick with the methods that are already in use by the form, and avoid adding newer methods until there is a clear signal from users that upgrades to a newer era (Windows 8.0) can be implemented?
This allows for deferring research until later. Instead, existing compatible techniques can be implemented right away, keeping momentum moving forward.
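As a middle ground, runtime feature detection can guard any newer built-in that does creep in. A small sketch (the function here is my own illustrative example, not from the form's code):

```javascript
// Guard a newer built-in behind a feature check, falling back to an
// equivalent that works on much older JavaScript engines.
function containsText(haystack, needle) {
  if (typeof String.prototype.includes === 'function') {
    return haystack.includes(needle); // ES2015+
  }
  return haystack.indexOf(needle) !== -1; // works on very old engines
}
```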
Managed to get .NET MAUI 8 building with artifact generation in GitHub Actions. Some key takeaways:

- Workflow files live in the `.github/workflows` dir, and `.yml` naming and locating are recommended.
- Using both `on: pull_request` and `on: push` might cause unnecessary, additional Action executions. Stick with a single `on` Action for a file, or use GitHub "Reusable Workflows".
- A reusable workflow template is triggered via `on: workflow_call:`. Create other workflows that reference the template by adding `jobs: template-name: uses: ./path-to-template.yml`.
- A MAUI project needs `dotnet workload restore {projectFile.csproj}` in order to build properly, and the working directory must be set to that Project's directory specifically.
- To capture artifacts, call `dotnet publish --artifacts-path {path} {rest-of-commands} ./{publishDir}` in the `run:` statement, and then in a following Task, `uses: actions/upload-artifact@v2` with `with: path: ./{publishDir}`.

.NET MAUI 8 is a pretty challenging framework to work with. I love the results, having a Windows Desktop App that is (very basically) also an Android App - great stuff! In hindsight, I should have looked into Xamarin several years ago when I got going with .NET.
Recent activity has been to start up a second sprint to update and build out my mobile weather app, and prepare it for deployment in the Google ecosystem. There are many hurdles to overcome, but I've knocked out a couple so far:

- Using the `dotnet` CLI for running build and test operations.
- `dotnet build` and `dotnet publish` are very similar, and support overriding existing `csproj` file configurations by naming elements and setting their values, such as `-p:AppxPackageSigningEnabled=true`. I'm not certain I understand just what all can be overridden, but I plan to experiment with it.

Spent a good amount of time debugging, adding on to, and prepping the Bigfoot Bib Report WL Form project for the next big version. It's not clear whether it will be pressed into service - I'll have to get a few friend hams involved to take a look at the form, find issues, and provide feedback.
Some things I learned along the way:

- I have a `<label>` that displays information related to three different buttons. In this case it isn't an error, but probably is a problem for accessibility, and I'll need to consider a work-around.
- A console warning pointed to an `icon` reference missing, not the icon itself. I learned these need to be 30x30 pixels, and can be `.ico`, `.png`, or `.jpg`. So I just added `<link rel='icon' href='favicon.png'>` and the console log warning went away (for Chrome, not for Edge though). With this experience, I'll know how to implement this properly for the next website I build.

After 3 days of focusing on learning modules and new concepts, I took a break and worked on some field operations planning and setup involving a Raspberry Pi Zero2 W. Working with Linux is getting easier, and the biggest issue has been finding consistent documentation. Not all docs are created for the version of Raspberry Pi OS that I'm working on, so some packages either aren't available, or don't work (properly or the same). Also, some documents leave a lot to be desired; for example, a `manpages` entry on debian.org had a lot of "should-be" and "probably" remarks in it, which doesn't sound all that promising.
Going forward, I'm going to stick with Bullseye 32-bit for legacy and micro RPi projects, including the RPi 4. If I get my hands on an RPi 5 and/or Bullseye approaches EOL for those legacy RPis, I'll start moving over to Bookworm.
On Saturday I completed all 19 modules of "Accelerate Developer Productivity with GitHub and Azure for Developers"! This took a big effort, and I took (way too many) notes. Thankfully some portions covered topics I already had experience with, and the other areas were great fill-in to help me build out my skill sets and build up experience.
I took some time out to learn more about and make notes about Polyglot Notebooks. I have much more to learn but these seem like a good tool.
It's been quite a while since I've reviewed Tree data structures.
Terminology:

- A leaf Node is also called an `External Node`.

Types of Nodes (and therefore, trees):

Some Basic Calculations (for perfect binary trees; conventions for counting height vary):

- Internal node count: `2^(Height - 1) - 1` (counting the root's level as 1).
- Height for N total nodes: `log2(N + 1) - 1` (root at height 0).
- Leaf node count: `2 ^ Height` (root at height 0).
- Missing (null) child links: `N + 1`.
- Total nodes for N internal nodes in a full binary tree: `N * 2 + 1`.
- Roughly half of all Nodes are leaves: `N / 2` Nodes.

BigO Analyses (N = Nodes):

- Searching a balanced tree takes `O(log(N))` steps.
- For other operations, `O(log(N))` is a good starting point (it can only get worse from that worst-case starting point).
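These counting formulas can be sanity-checked with a few lines (this sketch uses the root-at-height-0 convention throughout):

```javascript
// For a perfect binary tree of the given height (root at height 0):
// nodes N = 2^(H+1) - 1, leaves = 2^H, internal = 2^H - 1,
// null child links = N + 1, and height = log2(N + 1) - 1.
function perfectTreeStats(height) {
  const nodes = 2 ** (height + 1) - 1;
  const leaves = 2 ** height;
  const internal = nodes - leaves;   // = 2^H - 1
  const nullLinks = nodes + 1;       // 2N links total; only N - 1 are used
  const derivedHeight = Math.log2(nodes + 1) - 1;
  return { nodes, leaves, internal, nullLinks, derivedHeight };
}
```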
Coding Trees:

```csharp
public class BinaryNode
{
    public int Data;
    public BinaryNode? LeftChild;
    public BinaryNode? RightChild;

    public BinaryNode(int data)
    {
        Data = data;
    }
}

// instantiate Nodes
BinaryNode root = new(4);
BinaryNode node1 = new(1);
BinaryNode node2 = new(10);
// etc

// build a Binary Tree
root.LeftChild = node1;
root.RightChild = node2;
// etc
```
And for an N-Degree Tree:

```csharp
public class TreeNode
{
    public int Data;
    public TreeNode[]? Children;

    public TreeNode(int data)
    {
        Data = data;
    }
}
```
Note: It is possible to add a `Parent` reference to BinaryNode or TreeNode so that it is easier to traverse 'up' the Tree.
Information about Branches can be stored if necessary, but this topic is more relevant for Graphs and Networks.
Traversing A Tree:
Preorder Traversal:

- Call the `TraversePreorder()` method with a Node instance and it will do the rest.

```csharp
public class BinaryNode
{
    // Fields and CTOR

    public void TraversePreorder(BinaryNode currentNode)
    {
        // process node here e.g. push Data into an array, output to console, etc.
        if (currentNode.LeftChild is not null)
        {
            TraversePreorder(currentNode.LeftChild);
        }
        if (currentNode.RightChild is not null)
        {
            TraversePreorder(currentNode.RightChild);
        }
    }
}
```
Note: It is possible to add a helper Class that will do this for a Node. This allows a single object to store results from the method, such as an Array of traverse Node values, rather than storing the structure within each Node.
Note: This traversal can be used to traverse N-degree Tree Nodes with N greater than 2.
Inorder Traversal:

- Also called a `Symmetric Traversal`.

```csharp
public class BinaryNode
{
    // Fields and CTOR

    public void TraverseInorder(BinaryNode currentNode)
    {
        if (currentNode.LeftChild is not null)
        {
            TraverseInorder(currentNode.LeftChild);
        }
        // process this node here
        if (currentNode.RightChild is not null)
        {
            TraverseInorder(currentNode.RightChild);
        }
    }
}
```
Note: This is effective for Binary Tree Nodes, but is ambiguous for Tree Nodes with more than 2 Child Nodes.
Postorder Traversal:

```csharp
public class BinaryNode
{
    // Fields and CTOR

    public void TraversePostorder(BinaryNode currentNode)
    {
        if (currentNode.LeftChild is not null)
        {
            TraversePostorder(currentNode.LeftChild);
        }
        if (currentNode.RightChild is not null)
        {
            TraversePostorder(currentNode.RightChild);
        }
        // process this Node here
    }
}
```
Note: This traversal can be used to traverse N-degree Tree Nodes with N greater than 2.
Breadth First Traversal

```csharp
public class BinaryTree
{
    // Fields
    public Queue<BinaryNode> LevelNodes = new();

    public void TraverseBreadth(BinaryNode rootNode)
    {
        LevelNodes.Enqueue(rootNode);
        while (LevelNodes.Count > 0)
        {
            BinaryNode currentNode = LevelNodes.Dequeue();
            // process the current node e.g. output its Data or store Data in a string or array, etc.
            // add children (the next Level) to the Queue
            if (currentNode.LeftChild is not null)
            {
                LevelNodes.Enqueue(currentNode.LeftChild);
            }
            if (currentNode.RightChild is not null)
            {
                LevelNodes.Enqueue(currentNode.RightChild);
            }
        }
    }
}
```
Leveraging FIFO ordering guarantees all Nodes at a particular Level are processed before moving to the next Level.
BigO Analyses:

- The Queue can hold up to `N / 2` Nodes at once (the widest Level of a perfect binary tree), therefore O(N) storage.

That's enough for now. It's always good to review these concepts. One day, it will be much easier for me to grasp and use them.
MS Build 2024 is happening this week and will consume a large chunk of time. I have a schedule set, and am looking forward to learning all that MSFT and their partners have to share!
- Press `Start` + `x` to open a context menu of tools like Settings, Device Manager, and Disk Management. One other is `Ctrl` + `Shift` + `Esc`, which opens the Task Manager.
- Check out the `WinUI 3 Gallery Tools` project!

There were many other interesting takeaways, many more details I want to explore, and several MSFT Learn Modules I want to work through now. That is a sign of a successful event!
I've learned that Semantic Elements are helpful when developing accessible, screen-reader-ready websites. While going through Microsoft Learn Blazor Modules, I've been trying to reinforce what I've learned by using them. Here is a list of common Semantic Elements, sourced from Mozilla Developer Network (MDN):

- `<body>`: The HTML content of the document. There can be only one! This is the root of all sectioning.
- `<address>`: Enclosed HTML has contact information for 1 or more people or an organization.
- `<article>`: Self-contained composition within a document, page, application, or site. Intended to be reusable content e.g. a Forum Post, Newspaper article, etc.
- `<aside>`: Enclosed content is only indirectly related to the document's main content.
- `<footer>`: The bottom of its nearest ancestor sectioning content, or the root (`<body>`). Usually includes content author(s), copyright data, and/or links to related documents.
- `<header>`: Introductory content or an area where Navigational aids are contained. May also contain a logo, search form, author, or other elements.
- `<h1-6>`: Six levels of section headings. Will be auto-formatted by default CSS. Note: I try to stick with only 3 heading levels. Note: For the sake of screen readers, it is better to re-style these elements and use 1 as the top level than it is to use a highest level of 3, for example.
- `<hgroup>`: Heading grouped with any secondary content e.g. Subheadings, Alt Titles, Taglines, etc.
- `<main>`: The dominant content of the body of a document, or central functionality of an application.
- `<nav>`: Section of the page that contains navigation links relative to the current document, and/or other documents. Can be used to define Table of Contents, menus, indexes, etc.
- `<section>`: Generic standalone portion of a document. The least-specific semantic element. Note: Be certain to include a heading!
- `<search>`: Form or controls used for searching or filtering operations.

The above is the list of content sectioning elements. I like to think of these as structurally significant elements that are critical to developing an accessible webpage from the start. There are many other element types that can (and should) be used as designed; here are the groups:
- Main root: `<html>`.
- Document metadata: contained in the `<head>`.
- Sectioning root: the `<body>`.
- Text content: block-level elements between the `<body>` tags. These are useful for Accessibility and SEO, and relate directly to the content they wrap.
- Inline text semantics: e.g. use `<strong>` or CSS `font-weight` instead of `<b>`.
- Demarcating edits: the `<ins>` and `<del>` elements used for added or removed content (presumably in a versioned document representation).
- Forms: everything in the `<form>` element! Check out MDN HTML Element #Forms for details.
- Interactive elements: `<details>` wraps information that is visible only when toggled "open". `<dialog>` wraps an alert or subwindow. `<summary>` provides a legend and also acts as the toggling function for the `<details>` element, to open or close it when clicked.
- Web Components: `<slot>` is used to provide a markup window for a separate DOM Tree within the document, and `<template>` sets a placeholder for where content will eventually be added, usually via a script or other DOM-editing function.

There are also a ton of deprecated elements that should not be used.
While working through Blazor training, I kept finding myself opening Firefox or Edge and typing-in the localhost and port of the running Blazor Server, because I didn't want to be forced to use Chrome or switch the OS default browser to test a site on other browsers.
This took just a little investigating, but here is what I did:

- The `launch.json` file contains a collection of launch configurations that fills the list of options in the Run And Debug tool's F5 button.
- Two configuration properties matter here: `serverReadyAction` and `launchBrowser`, the former being a new feature, the latter being an older (but still supported) feature.
- By default `serverReadyAction` is added and configured, which calls the OS-default web browser, or the VS Code-configured default browser, if edited. That's fine, but if I want to have a selection of browsers to launch, I need to have multiple configurations to choose from.

Here's a sample showing only the configuration item that launches Firefox (other configuration items were omitted):
```json
{
    "version": "0.3.0",
    "configurations": [
        {
            "name": "Launch In Firefox (web)",
            "type": "coreclr",
            "request": "launch",
            "preLaunchTask": "build",
            "program": "${workspaceFolder}/bin/Debug/net6.0/BlazingPizza.dll",
            "args": [],
            "cwd": "${workspaceFolder}",
            "stopAtEntry": false,
            "launchBrowser": {
                "enabled": true,
                "args": "${auto-detect-url}",
                "windows": {
                    "command": "${env:ProgramFiles}\\Mozilla Firefox\\firefox.exe"
                }
            }
        },
        {
            "name": ".NET Core Attach",
            "type": "coreclr",
            "request": "attach"
        }
    ]
}
```
Be certain to verify the `bin/Debug` dotnet version in the path and the `dll` filename are correct.
When I was building my .NET MAUI application "Mob-WX", I built a Blazor Server that could accept APK files and serve them up for rapid deployment to my physical Android phone. The server uses an MS-SQL back-end to map files on the file system to user-friendly names and dates, and allows adding and removing entries and files locally.
Every now and then, the SignalR connection would break between the Browser and the Blazor Server, and I didn't understand why. After completing some Blazor Server training modules, I've learned that a Blazor Lifecycle Method code block is probably throwing an unhandled Exception, and breaking the SignalR connection is the default behavior after such an event.
I'll have to go back to that project and add appropriate Exception Handling. Hooray for continual learning and self improvement!
Update: I've completed the planned Blazor learning modules! On to the next thing!
I've registered for the MS Learn Challenge - Build 2024 Edition and have a plan to get this Plan's Modules knocked out by Tuesday end of day.
Coding and transpiling TS is an interesting adventure, especially when looking at a project other than my own. It seems like there are issues with walking dependency trees, either in the IDE and/or currently installed Extensions, so there are lots of red squigglies on screen. This is very distracting, and I've asked around for help but haven't received any responses so far. I'll push forward anyway.
After reviewing my progress on MSFT Learn modules after several weeks away from them, I discovered some ASP.NET and Blazor modules I had started but not yet completed. Upon completing those I started looking at Blazor as a framework that could help build several projects going forward:
I've restarted practicing DS&A challenges. In the last few weeks I've lost a bit of familiarity in this area due to focusing on other projects.
Quick review of a Singly Linked List with Insert and GetValueAfter methods:
```csharp
public class LLNode
{
    public int Data { get; set; }
    public LLNode? Next { get; set; }

    public LLNode(int data)
    {
        Data = data;
    }
}

public class SinglyLinkedList
{
    public LLNode? Head { get; private set; } = null;
    public bool IsEmpty => Head is null;

    public SinglyLinkedList(int data)
    {
        Head = new LLNode(data);
    }

    public void Insert(int data)
    {
        if (IsEmpty)
        {
            Head = new LLNode(data);
        }
        else
        {
            LLNode newNode = new(data);
            newNode.Next = Head;
            Head = newNode;
        }
    }

    public int GetValueAfter(int precedingData)
    {
        LLNode? current = Head;
        while (current is not null)
        {
            if (current.Data == precedingData &&
                current.Next is not null)
            {
                return current.Next.Data;
            }
            current = current.Next;
        }
        // If a suitable Exception type does not already exist, create one that inherits from Exception
        Exception NotFoundException = new("Could not find value in this list.");
        throw NotFoundException;
    }
}
```
Quick review of a Stack data structure:
```csharp
public class MyStackNode
{
    public int Data { get; set; }
    public MyStackNode? Next { get; set; }

    public MyStackNode(int data)
    {
        Data = data;
        Next = null;
    }
}

public class MyStack
{
    public MyStackNode? Top { get; set; } // null means empty
    public bool IsEmpty => Top is null;

    public void Push(int data)
    {
        if (IsEmpty)
        {
            Top = new MyStackNode(data);
        }
        else
        {
            MyStackNode newNode = new(data);
            newNode.Next = Top;
            Top = newNode;
        }
    }

    public int Peek()
    {
        if (IsEmpty)
        {
            Exception EmptyStackException = new("This stack is Empty.");
            throw EmptyStackException;
        }
        return Top!.Data;
    }

    public int Pop()
    {
        if (IsEmpty)
        {
            Exception EmptyStackException = new("This stack is Empty.");
            throw EmptyStackException;
        }
        MyStackNode temp = Top!; // the IsEmpty check guarantees non-null here
        int topData = temp.Data;
        Top = Top!.Next;
        temp.Next = null;
        return topData;
    }
}
```
Sometimes there are surprising features in WPF. For example, implementing Binding Validators on Controls can have the side-effect of the Source property not receiving the data that did not pass validation. I'm sure this is by design and, with a little thought, it can make sense. After a few hours of tracking down a pesky bug in BF-BMX Beta 2, I concluded that the custom validation would not be compatible with updating the on-screen buttons and on-screen status updates. I'll need to look into an alternative means of providing on-screen feedback to the user when they've entered an invalid path.
Another bug that I invented while architecting the file system monitor wrappers is ignoring the difference between the nullable wrapper class, and the nullable FileSystemWatcher class itself:

- The wrapper class exposes a `Dispose()` method.
- FileSystemWatcher implements `IDisposable`, but my wrapper class only wrapped that functionality, rather than integrating it, or (perhaps better yet) doing property null-checks.

Going forward, I'll have to refactor the code to either:

- Move the `Observable` responsibility to the ViewModel.

At the May 1st meeting, discussion around tweaks and alterations led to a few new features. Implementing them was not too difficult and new builds were produced on 7-May for evaluation.
Some takeaways from this feature-update and debugging work:

- `System.Diagnostics.Process`: Use `Process.Start(args...)` to point to 'Explorer.exe' and the path to open. The default behavior of this namespace feature is to not attach the process to the process that launches it. This means a user could close BFBMX Desktop without also closing the launched Explorer instance.
- DateTime format specifiers can be mixed into an interpolated string, e.g. `string message = $"{dateTimeStamp:M} at {dateTimeStamp:HH:mm:ss} - {message}";`.
- Log writes are serialized with a `SemaphoreSlim` in an `async Task` method with a contained `Try-Catch-Finally` error handling structure. The down-side of avoiding blocking a thread is that log writes will often be in an unexpected order.

Completed MS Learn Web Accessibility Basics. It was focused on an ASP.NET webapp, but the concepts can be applied to any website, and some aspects can be applied to Windows Apps, too.
Key takeaways:

- Use a single `h1` header and use `h2` etc. headers to help navigate the page.
- Use the `alt` attribute on `img` tags to succinctly describe the image that is shown. Images that are simply decorative should have an `alt=""`, a blank attribute.
- Leaning on `div` and complex `style` (in-line or otherwise) to make the page "look great" might impede accessibility.
- Prefer semantic elements like `header`, `main`, `footer`, `h1` (et al), `img`, `aside` and others (see MDN Glossary on Semantics in HTML for more details). Using semantic elements ensures accessibility tools like Narrator and Tab-based navigation are possible. You can always re-style those elements without breaking their hierarchical, semantic, and navigation meanings.
- Use `input type="button"` or `button` elements instead of highly customized `div` elements, to maintain navigability and semantic effect.

When aren't there?
Working through tweaks discovered during the BF-BMX Beta Launch meeting, as well as bugs found since then, I'm confident to say that the bugs will never end. The question is: Can I address the bugs correctly to minimize the impact of remaining bugs, known and unknown, going forward?
Meanwhile, some key takeaways:

- When test comparisons involve hidden characters like `\r\n` and the like, the test might return a false positive (or false negative), which will slow the debugging process.

Lessons learned while developing Published App configurations:
- Using `dotnet` to publish is handy. It includes `--self-contained`, `--framework`, `--configuration {Release...}`, and `--arch {x64}` options, and can be configured to output to a separate folder with `--output {dir}`.

I attended this very rapid-paced, multi-topical stream of sessions revolving around developing on top of Azure services.
Here are my hastily written notes.
While I had some down time, I took a look at dealing with a few issues with the BF Race Tracker form.
Some succinct takeaways from the 24-April-2024 event I attended (and enjoyed):
- The `/fix` command: Just use the command, don't bother adding additional prompt text!
- Press `Shift` + `CTRL` + `P` and type `Templates`. The Project Template list is the same as those from `dotnet new ...` or Visual Studio's `New Project` menu item.
- Start a prompt with `@workspace` and follow with your question!
- Use `@workspace where's #selection tested?` to find test(s) related to specific code.
- `Flattening` an Array: To integrate multiple Arrays into one.

Throughout this week I've been focused on the BF-BMX project. I'll be meeting soon with at least 1 of the key end users to go over current status, find out what needs to be done, and to prioritize that work accordingly. For the last several weeks, as I've implemented features and squashed bugs, I've been focused on maintaining a working product between PRs. This has made it possible to work "ahead" of some scheduled work items, yet still be able to "go back" to a previous branch, make progress and/or fix bugs, and still be able to deploy a Debug or Release build for hands-on testing at pretty much any time.
It's surprising how much a project can change, even without specific design instructions to do so. For example, I had built out a custom Linked List to deal with a need for a FIFO-like queue operation with custom features. A standard Queue would not necessarily meet this need. After additional research, it turned out the custom data structure wasn't necessary, so it was removed from the project. This has happened a few times. At least I learn a little each time it happens:

- `[ObservableProperty]` should be used to wrap an `ObservableCollection<T>` to ensure notifications flow to subscribers, such as the UI/WPF.
- UI updates from non-UI threads must be marshaled via `App.Current.Dispatcher.Invoke(Action<T>)`.
- Packing complex logic into `Can` methods (i.e. `CanInitializeMonitor()`) makes the code hard to read and probably slows down execution. It is better to build a group of if-then blocks to return a boolean as quickly as possible, so that the calling method (probably an `ICommand` type) can execute, and any necessary logging and other processing can happen there.

There are a good number of concerns about how to properly parse plain text, especially if it is delimited in multiple ways (i.e. tabbed, comma, and/or spaces). While tab- and comma-delimited are not too difficult to deal with, I explored enabling space-delimited parsing and it became complex very rapidly. If space-delimited parsing is necessary, it will probably end up being a 2- or 3-stage process to ensure random sections of unimportant/unexpected data are not captured as "possibly good data".
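A sketch of how a staged parse might look: stage 1 splits on the unambiguous delimiters (tab, or comma with optional space), and stage 2 validates each token before accepting the record. The field rules here are hypothetical, not BF-BMX's actual requirements:

```javascript
// Two-stage delimited parse: split, then validate every token.
// Space-delimited input is deliberately NOT attempted here.
function parseRecord(line) {
  // Stage 1: prefer tab splitting; otherwise comma with optional space.
  const tokens = line.includes('\t')
    ? line.split('\t')
    : line.split(/,\s?/);
  // Stage 2: only accept tokens that look like data (1-10 word characters).
  const cleaned = tokens.map(t => t.trim()).filter(t => /^\w{1,10}$/.test(t));
  // Reject the whole record if any token failed validation.
  return cleaned.length === tokens.length ? cleaned : null;
}
```

Rejecting the whole record when any token fails is one way to keep "possibly good data" from sneaking through.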
I've come to realize that Mocking components of BF-BMX is necessary in order to perform unit testing. It has also become apparent that file access is unavoidable, given the requirements definitions for this solution. So off to research `Moq` and start trying to use it! Here are some key takeaways:

- Create a mock with `var moqThingy = new Mock<IThingy>();`, which allows Moq to create its own instance of the interface object.
- If `IThingy` defined a method called `SayHello()` that returned a string like "Hello World", Mocking the behavior will look like `moqThingy.Setup(inst => inst.SayHello()).Returns("It's aliiiive!");`, overriding the behavior of the Mocked instance method.

This Code Magazine article: Using Moq A Simple Guide To Mocking for .NET was helpful.
Since BFBMX is based on incoming data that is relational in nature, and Entity Framework was already added to the core system for future use, I attended an online discussion about `Bogus`, a `faker.js` spin-off Package for .NET.
In the discussion and demos were some key takeaways, and I feel like Bogus is probably a package I should explore for BFBMX or other projects going forward:

- Fake data can be defined through the `Fluent` API or the `Faker<T>` interface, the "Fake facade", or defining datasets directly.

I took a look at some open source projects that looked interesting to learn, use, and possibly contribute to. A common (and unfortunate) theme is a lack of directing members to lead core project activities like managing pipelines/CI-CD, maintaining release cycles, and general project management. On occasion, the situation is related to a parent project that is going to increment to a new major version, and the child project won't get any updates until after that increment happens to the parent. Another common theme is Issues that are closed (or effectively closed) but still marked as "Needs Help" (or similar), yet have not been updated in more than 1 year.
Any or all of these situations make it more difficult to get excited about actually using and becoming a contributing member of the community.
I will plan to revisit Humanizer in a few months, and meanwhile keep my eyes open for other interesting opportunities.
As for my personal OSS projects, it just so happened I needed to set up a Linux environment to work on a second project of mine. This forced me to install and configure WSL on my Surface Pro, and install the latest NVM so I could install the latest Node and NPM, and run the project's Express.js server.
Here are some highlights:

- Generate an SSH key pair (`ssh-keygen`) and send the public key to GitHub so that the WSL environment could push code to remote.
- An `http origin` is not compatible with an `ssh origin`. Not surprising; I had just forgotten about that fact, so it took a minute to recall how to add a new `remote` that uses SSH instead, but I got it.
Some more personal OSS experience: I went to explore refactoring some HTML, JS, and CSS website code for a specific purpose. Within 40 minutes I had a (very) simple website up and running with the intended feature functioning. It took a little longer to tweak the feature and determine just how much farther the feature could go (without becoming a lot of work), but this resulted in a go-forward plan and I am excited to see how it comes out.
One requirement I had was to match a string of characters that included either a tab, or a comma with an optional following space. Example cases: `123, ABC` or `123,ABC` or `123\tABC`.
I came up with a Regex Pattern of `\b\d{1-3}(,\s?|\t)\w{1-3}\b` but that would not properly capture all three cases, and it was difficult to understand why not.
After 15 minutes of fiddling with the pattern I asked GitHub Copilot how to build a Regex pattern that would meet a need like "1 to 3 digits followed by either a comma with or without a single space, or a tab, followed by 1 to 3 word characters". GHCP came up with the same pattern and explained (incorrectly, it turns out) how it worked.
So I spent another 20-30 minutes using regex101.com to work out what the problem was, and how to create the correct pattern. Microsoft's Learn documentation on dotnet standard regular expressions has a link to a PDF Cheat Sheet (that I had forgotten about! :wow:) that also came in handy.
Turns out the problem was how the pattern was actually being interpreted, based on how the alternation metacharacter `|` was being applied. To keep the pattern from evaluating as "1 to 3 digits followed by a comma and either an optional space or a tab...", the incorrect evaluation was corrected by applying the non-capturing group construct `(?:...)` to surround the alternate comma-or-tab argument, and by placing the tab character before the alternation character, like so:
`\b\d{1,3}(?:\t|,\s?)\w{1,3}\b`
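A quick sanity check of the corrected pattern against the three example cases (a sketch using .NET's `Regex`, not the original test code):

```csharp
using System;
using System.Text.RegularExpressions;

class RegexCheck
{
    static void Main()
    {
        // 1-3 digits, then a tab OR a comma with an optional space, then 1-3 word chars.
        var pattern = new Regex(@"\b\d{1,3}(?:\t|,\s?)\w{1,3}\b");
        string[] cases = { "123, ABC", "123,ABC", "123\tABC" };
        foreach (string s in cases)
        {
            Console.WriteLine(pattern.IsMatch(s)); // True for each case
        }
    }
}
```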
Lessons learned:

- Use the non-capturing group construct `(?:...)` to group items together correctly.
- Order the alternates around `|` so that the pattern does not fail-fast and evaluate to the wrong sub-pattern match.

I recently completed a LeetCode exercise where the input was an array of signed integers, and the goal was to return the smallest positive integer that was not in the array. For example, the solution should process an array input of `[ 1, 3, 5, 4, -1 ]`
and return the integer 2. Additional constraints were included such as O(n) Runtime and O(n) or better storage.
I used the .NET class `SortedDictionary<TKey, TValue>` as a simple and fast storage mechanism. Sorted data is helpful when looking for specific values, but writing a correct, efficient sorting algorithm is usually challenging and time consuming. By storing each input value as the Key and its array index as the Value, it is possible to find missing values. Since the goal is to find the lowest positive value missing from the input, it is fairly simple to compare a counting index to the stored KVPs in the SortedDictionary; as soon as a key is not found, return it, because that is the value missing from the input.
```csharp
// Basic SortedDictionary<K,V> usage for this challenge
int[] inputArr = { 1, 3, 5, 4, -1 };
SortedDictionary<int, int> sortedInput = new();

for (int idx = 0; idx < inputArr.Length; idx++)
{
    int currentValue = inputArr[idx];

    // skip any values that are 0 or less, or greater than the length of the inputArr
    if (currentValue < 1 || currentValue > inputArr.Length)
    {
        // skip to the next iteration to save storage space
        continue;
    }

    // SortedDictionary will throw an Exception if you try to add a KVP that already exists
    if (sortedInput.ContainsKey(currentValue) == false)
    {
        // Add the VALUE of the input as the KEY
        sortedInput.Add(currentValue, idx);
    }
}

// more code...
```
Once the SortedDictionary has all of the values greater than 0 and no greater than the length of the input array, use it as a lookup table. Start at key 1 (per the constraint) and check whether that key exists. If it does not, return it; otherwise iterate to the next value (SortedDictionary TKey) until one is missing. If all keys in the SortedDictionary are contiguous, then the return value is one greater than the count of items in the SortedDictionary.
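Putting both phases together, a minimal sketch of the whole approach (method and variable names are mine, not the submitted solution):

```csharp
using System;
using System.Collections.Generic;

class FirstMissingPositive
{
    // Returns the smallest positive integer absent from the input.
    public static int FindLowestMissing(int[] inputArr)
    {
        var sortedInput = new SortedDictionary<int, int>();
        for (int idx = 0; idx < inputArr.Length; idx++)
        {
            int currentValue = inputArr[idx];
            // Only candidates in [1, Length] can affect the answer.
            if (currentValue < 1 || currentValue > inputArr.Length) continue;
            if (!sortedInput.ContainsKey(currentValue))
                sortedInput.Add(currentValue, idx);
        }

        // Probe keys 1, 2, 3, ... until one is missing.
        int candidate = 1;
        while (sortedInput.ContainsKey(candidate)) candidate++;
        return candidate;
    }

    static void Main()
    {
        Console.WriteLine(FindLowestMissing(new[] { 1, 3, 5, 4, -1 })); // 2
    }
}
```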
I've purposefully avoided giving too much detail above, other than to demonstrate one possible usage of SortedDictionary<TKey, TValue>
to solve one of many code problems. My solution was not very performant in run time or storage, and it should not be referenced as a basis to solve a similar sounding challenge. Readers are responsible for following code challenge rules which could include not using a resource like this to assist them directly.
Attended a MSFT Reactor session about GitHub and its Certifications.

- Azure Samples GitHub repo: available examples of code used for Azure features.

Now that the Desktop component is about 75% functional, it was time to start integration testing to see how the Desktop and Server components were working together. They weren't, so some debugging was necessary to fix them. Now they are talking to each other and fewer exceptions are being thrown; however, the API Server isn't logging anything to file other than the Message data and Bib records, so that is the next logical step before continuing integration testing. Having file-based logs will help with troubleshooting and verifying functionality from here on out!
In the future I'll need to re-write the logging mechanisms to be portable, rather than tied so closely to the Desktop and API Server projects. For now it is good enough, and having a refactoring exercise to perform in the future won't impact the initial release much (if at all).
The ViewModel code is getting a bit lengthy and difficult to read. This tells me I need to encapsulate some of the state and functionality. Doing so will have to wait until a few more features are completed: A meeting with the stakeholders will be necessary (soon) to ensure the outputs and functionality are going to meet expectations, and to tweak (or reset) expectations that have changed or were otherwise not well understood.
Multi-directory monitoring is functioning in debug sessions, and in systems-test scenarios using actual Winlink Express and running SUT Release Builds.
A presentation has been put together that overviews the system's main components and features, introduces how to configure the desktop and server, and discusses the operation and logging aspects. In a future meeting (soon, TBD), the presentation and a demo will be given, which should help draw out input on necessary changes and tweaks prior to the scheduled May 1st Beta release.
Interesting online conference about Java, JVM, and support for Java App development in VS Code, and running Java Apps in Azure!
This is exciting news for the Java community, and for me the onramp to building Java Apps in VS Code is flattened through simplification of Java project setup and other aspects of the software development lifecycle.
Completed some interview preparatory work, including a LeetCode challenge to convert Roman numerals to integers using JavaScript. I had solved a similar problem some time ago using Java, but it still took me about 2.5 hrs to diagram, pseudocode, step through, code, and evaluate its performance.
Some key takeaways solving LeetCode Roman to Integer:

- `string[idx]` is wrong; instead use `string.charAt(idx)` in JavaScript.
- `select(arg) { case argN: ...; }` is wrong (must be a leftover from my very short Visual Basic experience); instead use `switch(arg) { case argN: ...; break; }`. A subtle difference that I regularly get confused over.

While I was at LeetCode, I took a look at one of my previous submissions and noticed the Big-O in time was very poorly ranked. It took me about 15 minutes to refactor the code to get a better execution time, closer to 50% of all ranked submissions. Storage space was also average, but the spread of space utilization was so small that it really doesn't matter (i.e. 50 MB vs 51 MB is just a rounding error for a compiled C# application).
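For comparison, here is the same Roman-to-Integer idea sketched in C# rather than the JavaScript I actually submitted (names are mine; the subtract-when-smaller rule does the heavy lifting, and `switch` handles the symbol mapping):

```csharp
using System;

class RomanToInteger
{
    public static int FromRoman(string roman)
    {
        int total = 0;
        for (int i = 0; i < roman.Length; i++)
        {
            int value = CharValue(roman[i]);
            // A symbol smaller than its right neighbor subtracts (e.g. IV = 4).
            if (i + 1 < roman.Length && value < CharValue(roman[i + 1]))
                total -= value;
            else
                total += value;
        }
        return total;
    }

    // switch (not select!) maps each Roman symbol to its value.
    static int CharValue(char c)
    {
        switch (c)
        {
            case 'I': return 1;
            case 'V': return 5;
            case 'X': return 10;
            case 'L': return 50;
            case 'C': return 100;
            case 'D': return 500;
            case 'M': return 1000;
            default: throw new ArgumentException($"Not a Roman numeral: {c}");
        }
    }

    static void Main()
    {
        Console.WriteLine(FromRoman("MCMXCIV")); // 1994
    }
}
```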
After lunch I decided to do another LeetCode challenge. This one was to return the most common prefix characters from an array of strings:

- Example: `["one", "only", "onlay"]` returns `"on"`.
- Constraint: `string.length <= 200`.
- I used a `Set` to store items and check for non-unique entries, by asserting that only 1 item should be in the set if all characters are the same; then I could iterate over the next character.
- The result is the `string.substring(startIdx, currentIdx + 1)` of the first string in the input array.
- If there is no common prefix, return `""` (an empty string).
- I reached for `length` on a `Set`. I should know by now that a built-in object (and custom types/objects) should contain a `size` property, and it is only objects like `string` and `Array` that implement the `length` property (in both JS and C#).
- The `Number` object in JavaScript does have a `MAX_VALUE` property, but at the time I couldn't recall it. However, it was better to use the given constraint than to look it up.

At the end of the week, I sorted out some known issues with BF-BMX and am getting ready to implement additional "Watchers" in the app:

- `Lock` enabled better sharing.

Lastly, I put in some extra effort to prepare for interviewing. I'm tweaking my schedule to make these tasks more regular. There have been a few very interesting open positions posted recently that I look forward to researching, to learn more and possibly apply for.
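The Set-based prefix check from the LeetCode challenge above, sketched in C# with a `HashSet<char>` standing in for the JavaScript `Set` (names are mine):

```csharp
using System;
using System.Collections.Generic;

class CommonPrefix
{
    // Walk column-by-column; if every string has the same character at
    // position idx, the set holds exactly one item and we can advance.
    public static string LongestCommonPrefix(string[] words)
    {
        if (words.Length == 0) return "";
        for (int idx = 0; idx < words[0].Length; idx++)
        {
            var column = new HashSet<char>();
            foreach (string word in words)
            {
                // A shorter word ends the shared prefix here.
                if (idx >= word.Length) return words[0].Substring(0, idx);
                column.Add(word[idx]);
            }
            // More than one distinct character: prefix stops at idx.
            if (column.Count > 1) return words[0].Substring(0, idx);
        }
        return words[0];
    }

    static void Main()
    {
        Console.WriteLine(LongestCommonPrefix(new[] { "one", "only", "onlay" })); // on
    }
}
```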
Writing log information is an important feature of an App. During development and debugging, it can provide an audit trail of operations happening under the hood so that issues can be traced to the source more easily. When an App is released, an end user can review the logs to help confirm the App is "doing the right thing" or as breadcrumbs to determine the cause of unexpected behavior. In the past I've developed a couple different logging services that were crude and simplistic (they worked fine for very low activity apps), or utilized .NET built-in ILogger functionality to get Console-level logging output. For BF-BMX, it was important to get a more robust and scalable file-logging solution in place for the desktop application.
I took extra time to learn and understand how to create a custom logger in .NET, and here are a few outcomes from getting it going for the first time:
- An `ILogger`-compliant logging utility requires building a custom Logger class that implements `ILogger`, a custom provider that implements `ILoggerProvider`, and a custom, somewhat abstracted Configuration class.
- The `ILoggerProvider` implementation must provide a means of configuring a new custom logger, getting the current configuration, and returning a custom logger (`CreateLogger()`) for use when one is called for by the IoC container.
- Use the `IConfiguration` pattern to control logging behavior during IoC container services setup.
- Use a `StreamWriter` instance to append formatted message data to a file at a location that the custom configuration class can set at App start-up.

This was a difficult thing to implement because I had to trust .NET to do some work for me once I set up the classes per the interface requirements:

- Once I understood `TState` and `Func<T>` in method params, and the definition and use of the custom Configuration object, it became a bit easier.
- Calling `services.Configure<MyLoggerConfiguration>()` was necessary. The param in that method is simply a lambda that sets a `config` item by calling into the custom Config object.
- `services.AddLogging()` is absolutely required in order to get any of this to work at all.

An issue that I knew would come about: logging from multiple parallel tasks can cause IO Exceptions while attempting to write logging output. At least logging is implemented at a basic level, and I can work around parallel IO by redesigning the logging a little bit.
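A minimal sketch of the three pieces described above (class names, file path, and message format are placeholders of mine, not the BF-BMX implementation; it targets a recent Microsoft.Extensions.Logging abstractions package):

```csharp
using System;
using System.IO;
using Microsoft.Extensions.Logging;

// Somewhat abstracted configuration: where to write, and the minimum level.
public class FileLoggerConfiguration
{
    public string FilePath { get; set; } = "app-log.txt";
    public LogLevel MinimumLevel { get; set; } = LogLevel.Information;
}

// The ILogger implementation: formats entries and appends them to a file.
public class FileLogger : ILogger
{
    private readonly string _category;
    private readonly FileLoggerConfiguration _config;
    private static readonly object _writeLock = new(); // naive guard for parallel writers

    public FileLogger(string category, FileLoggerConfiguration config)
    {
        _category = category;
        _config = config;
    }

    public IDisposable? BeginScope<TState>(TState state) where TState : notnull => null;

    public bool IsEnabled(LogLevel logLevel) => logLevel >= _config.MinimumLevel;

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
        Exception? exception, Func<TState, Exception?, string> formatter)
    {
        if (!IsEnabled(logLevel)) return;
        lock (_writeLock)
        {
            // StreamWriter appends formatted message data to the configured file.
            using StreamWriter writer = File.AppendText(_config.FilePath);
            writer.WriteLine($"{DateTime.Now:O} [{logLevel}] {_category}: {formatter(state, exception)}");
        }
    }
}

// The ILoggerProvider implementation: hands loggers to the IoC container.
public class FileLoggerProvider : ILoggerProvider
{
    private readonly FileLoggerConfiguration _config;
    public FileLoggerProvider(FileLoggerConfiguration config) => _config = config;
    public ILogger CreateLogger(string categoryName) => new FileLogger(categoryName, _config);
    public void Dispose() { }
}
```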
Watched Factory Pattern with Dependency Injection by Tim Corey regarding the Factory Pattern. Some key takeaways:

- Use `Extension Methods` that accept the `IService` services type and use that to inject the dependencies of the Factory classes. This is not required, but it removes lots of code from the IoC Container, i.e. fewer `service.AddTransient<TInterface, TImplementation>()` calls, etc.
- Call a `factory method` instead of calling a `constructor`.

Tim Corey mentioned the following in Factory Pattern with Dependency Injection:
I ran into an issue where a Blazor app was calling JavaScript (through .Net interop classes), and the JS code would call .Net back to update a field in the razor file, but the change would not show on the page. JS was delayed in returning a response, so some asynchronous processes were at play.
Key takeaways:

- When debugging around `IJSRuntime`, try removing simple and/or asynchronous code blocks like `setTimeout()` in JavaScript to see if everything else is working as expected.

Note: Blazor `StateHasChanged()` notifies the Blazor component that bound properties have been updated.
- Avoid leaning on a built-in function (e.g. `Math.Floor(int number)`) when not completely familiar with it. Doing so could introduce a bug or other unexpected behavior that will be difficult to explain and fix.
- `Console.WriteLine(string message)` will not only slow down the run time of the code, but will also increase the memory usage. Leave these out when submitting a final solution.

I should start finding ways to make these challenges more fun to complete, rather than over-challenging myself by not preparing for them in any way. For example, when I first see a Linked List challenge that I want to work on, I should:

- Review `while` code blocks and recursive methods.

Completed initial BF-BMX API Server build. All updates are documented in the README. There are some open questions about the output logging formats. During implementation, I knew changes to logging might be necessary, so I've made it relatively easy to change the logging while minimizing how much code is touched or affected.
Implemented many tests against the BF-BMX service and API, and started running some simple input-output testing using the Swagger UI.
The BF-BMX user interface is the next step. Leveraging .NET 6, WPF, and the Community Toolkit, my goal is to focus on the functionality of the UI. There are several synchronous and asynchronous processes running under the hood, and these need to work in order for this project's output to become useful. Once the functionality has been well tested, style and UI tweaks will be added for an attractive, usable interface.
Attended MS Reactor session about dev productivity, dev flow and artificial intelligence, and other resources and tools to help with developer productivity.
Collection of random thoughts taken while attending Azure Developers JavaScript Day hosted by Microsoft and Microsoft Azure Developers.
What GitHub Copilot Can Do:

- Reference context with `#`, like `#file`, `#selection`, etc.
- Use the `reload` command in the VSCode Palette to restart it.
- `Spotlight` keeps the Palette open while selecting, scrolling, and editing code!

What is Retrieval Augmented Generation (RAG)? It is a code pattern used to leverage augmented capabilities of LLMs.
What is LangChain/LangChainJS? Framework for developing Apps using backend LLMs.
How can Copilot be configured to query my custom data?
Related References
Max and Stephan ran a great overview of Playwright!
- Use `await expect()` to define a test that asserts what controls are visible.

Began reading up on DotNET Foundation project CommunityToolkit `MVVM`. I'm a little worried about this project, but initial impressions are it is a handy code generator for things like object observability, notification, commanding, and messaging in WPF (and UWP, Xamarin, and possibly others).
I'll do some experimentation before I decide whether to utilize the CommunityToolkit for BF-BMX.
I had a silly question, wondering if a WPF control could display a Queue of items. To further complicate the question, the queue would be accessed asynchronously by another process to enqueue and dequeue items.
- A simple backing collection such as `List<T>` could work.
- Wrapping `List` members to make it look and act like a Queue makes sense.

I came up with a synchronous solution that involves inheriting from `ObservableCollection<T>`, overriding `InsertItem()` and `RemoveItem()`, and also adding `Enqueue()` and `Dequeue()` methods for code readability.
First, set up EventArgs for the custom queue:
```csharp
public class PersonChangedEventArgs : EventArgs
{
    public readonly Person ChangedItem;
    public readonly ChangeType ChangeType;
    public readonly Person? ReplacedWith;

    public PersonChangedEventArgs(ChangeType change, Person item, Person? replacement)
    {
        ChangedItem = item;
        ChangeType = change;
        ReplacedWith = replacement;
    }
}

public enum ChangeType
{
    Added,
    Removed,
    Replaced,
    Cleared
};
```
Next, inherit from ObservableCollection<T>
and insert EventHandlers:
```csharp
public partial class ObservableQueue : ObservableCollection<Person>
{
    public event EventHandler<PersonChangedEventArgs>? Changed;

    // add an instance to the end of the collection (highest index);
    // calling the overridden InsertItem ensures the Changed event fires
    public void Enqueue(Person person)
    {
        InsertItem(Count, person);
    }

    protected override void InsertItem(int index, Person newItem)
    {
        base.InsertItem(index, newItem);
        EventHandler<PersonChangedEventArgs>? temp = Changed;
        if (temp != null)
        {
            temp(this, new PersonChangedEventArgs(ChangeType.Added, newItem, null));
        }
    }

    // remove the first item (lowest indexed) from the collection
    public void Dequeue()
    {
        RemoveItem(0);
    }

    protected override void RemoveItem(int index)
    {
        Person removedItem = Items[index];
        base.RemoveItem(index);
        EventHandler<PersonChangedEventArgs>? temp = Changed;
        if (temp != null)
        {
            temp(this, new PersonChangedEventArgs(ChangeType.Removed, removedItem, null));
        }
    }
}
```
Then, in the ViewModel, implement the code-generating Attributes:
```csharp
public partial class MainWindowViewModel : ObservableValidator
{
    [ObservableProperty]
    private ObservableQueue people = new();

    // other observable property fields here like FirstName, LastName, etc

    [ObservableProperty]
    private string addPersonButtonText = "Add Person To Database";

    [ObservableProperty]
    private string removePersonButtonText = "Remove Person From Database";

    public string FullName => $"{FirstName} {LastName}";

    [RelayCommand(CanExecute = nameof(CanSetName))]
    public void AddPerson()
    {
        // instantiate newPerson and other processing, logging, etc code here
        People.Enqueue(newPerson); // add to end of the list (highest index)
        PeopleCount++;
        OnPropertyChanged(nameof(PeopleCount)); // notify change in count
        // null-out newPerson and FirstName and LastName fields
    }

    public bool CanSetName()
    {
        // if FirstName and LastName have text in them...
        AddPersonButtonText = $"Add {FullName} To Database";
        OnPropertyChanged(nameof(AddPersonButtonText));
        return true;
        // else, log this situation, etc., and return false
    }

    [RelayCommand(CanExecute = nameof(CanRemovePerson))]
    public void RemovePerson()
    {
        // other processing, logging, etc code here
        People.Dequeue(); // first item in the list
        PeopleCount--;
        OnPropertyChanged(nameof(PeopleCount));
    }

    public bool CanRemovePerson()
    {
        RemovePersonButtonText = $"Remove Person from DB ({PeopleCount})";
        OnPropertyChanged(nameof(RemovePersonButtonText)); // notify of button text change
        // if additional processing is necessary, expand
        // the return statement to a full if-then code block
        return PeopleCount > 0;
    }
}
```
Implementing asynchronous operations would be the next step, and enabling concurrent access to the collection will be another hurdle. If the above code ends up being used, the additional code enabling async, thread-safe concurrency will be posted here.
The last few days I have been working on implementing code and tests for the BF-BMX project. I realized there was room for improvement in defining some data details so I'm reaching out to the primary end user to get their preference on what the data should look like. This shouldn't block my progress at all, but might require some refactoring later, depending on what the response is.
There will be several interruptions in the upcoming weeks that will slow project progress, so this next week will be a big push week to get the core of the BF-BMX project ready for building-out and testing functionality. I have time to get this done before Beta testing begins, but I want to stay ahead of the schedule as much as is practical.
Made some good progress the last few days with WPF Input Validation, implementing async functionality, and backup/restore of in-memory data (which was largely completed in week 6).
I'll overview Tosker's Corner demonstrations of using input validation in the next four subsections.
Also check out this response by StackOverflow user MrB for more.
Remember: Updates to properties must include notifications, for example via `ObservableCollection` or `INotifyPropertyChanged` implementations.
ToskersCorner introduces four ways to accomplish validating input in WPF:
A couple of these actually rely on Validation by Exception behind the scenes, so there is plenty of crossover.
See my notes in Conted: WPF MVVM Learnings.
This is a real rabbit hole, but it is pretty interesting albeit complex. I've written some notes in dotnet async await notes to force my brain to process what Stephen Cleary is saying in his blog post/essay.
Some key takeaways:
- Async methods should return `Task` or `Task<T>`.
- Use `await Task.WhenAny()` and `await Task.WhenAll()` (use the await keyword).
- Use `await Task.Delay()` instead of sleeping a Thread.
- When using `ConfigureAwait(false)`, be aware that returning a result to the GUI thread will require additional code, so it is easier to add "fire and forget" code when using `ConfigureAwait(false)`.

For BF-BMX, I will probably want to look into using `AsyncCollection<T>` to manage multiple processes pushing data to a common repository.
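A tiny illustration of a couple of those points: `Task.WhenAll` composing two `Task<T>`-returning methods that use `Task.Delay` (method names and delays are mine):

```csharp
using System;
using System.Threading.Tasks;

class AsyncSketch
{
    // Returns Task<T> rather than void, per the guidance above.
    public static async Task<string> FetchLabelAsync(string name, int delayMs)
    {
        await Task.Delay(delayMs); // never Thread.Sleep in async code
        return $"{name} done";
    }

    static async Task Main()
    {
        // Both delays overlap rather than running back-to-back;
        // WhenAll preserves the order of its arguments in the result.
        string[] results = await Task.WhenAll(
            FetchLabelAsync("first", 300),
            FetchLabelAsync("second", 200));
        Console.WriteLine(string.Join(", ", results)); // first done, second done
    }
}
```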
I've added notes about TAP and asynchronous programming patterns in DotNET Async Await Notes.
The next thing to check out is Data Structures for Parallel Programming at MSFT Learn - I have a feeling this will provide even more insight into patterns that could come in handy when developing BF-BMX.
The other night I had a nightmare that I couldn't depict how to zip Linked Lists on paper. I took that as a sign that I am out of practice with DSA exercises. So I took a quick side-trek to review "Big-O Notation", and will prepare for a more regular review of algorithm and data structure challenges to keep my interviewing brain fresh.
MVVM Cross is an open-source project supported by the DotNET Foundation that applies the MVVM pattern to WPF, iOS, Android, and other platforms. I took a look at it as a possible framework to use in BF-BMX, replacing Caliburn Micro. Here are a few key takeaways:
In the Interleaving section of MSFT Learn article on Task-based Asynchronous Pattern, example code shows how to utilize Task.WhenAny(func)
to download images for display to a UI, as they become available. This will apply nicely to Mob-Wx on the 7-day forecasts page.
Although I was out of town for most of week 5, some software development happened anyway:

- Used `dotnet` to build the solution from scratch, and manually added the Library and Unittest projects.
- Use `string.IndexOf(char)` instead of iterating manually.
- Use `Regex.Match` for single-instance searching within a string, and `Regex.Matches` for locating multiple instances of a string.
- `Match` and `Matches` have helpful properties like `Count` and `StartingIndex` that are probably more efficient than a `for` or `foreach` construct.

While building the ADIF validator toy, I found myself creating "wrapper methods" to the library methods that actually did the work.
- For example, a `ConsoleLib` class that is only used by a Console UI, or some other library that basically acts as an API but otherwise does not analyze or change any data passed in either direction.

Started working on the Bigfoot Bib Message eXtractor project. My current approach to development is:
There are still some questions I need to get answers to (non-exhaustive list):
Exploring ways to get the API Server to utilize a database, item Collection, and logging. Here are a few takeaways:
- Declare `private readonly ILogger<MyClass> _logger;`, otherwise DI cannot inject it into that Class, and the compile will fail.

Exploring file monitoring, asynchronous code, and regular expressions. Here are a few takeaways:
- At first glance the docs seem to skip over `Match.Value` and `Match.Success`, yet those properties (and some other methods) exist and are listed in an expansive table! See the .NET 6 API Match Class.
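Those `Match`/`Matches` members in action (a small sketch; the sample string and pattern are mine). The collection returned by `Regex.Matches` exposes `Count`, and each `Match` exposes `Success`, `Value`, and `Index`:

```csharp
using System;
using System.Text.RegularExpressions;

class MatchSketch
{
    static void Main()
    {
        string text = "bib 101, bib 202, bib 303";

        // Match returns the first occurrence only.
        Match first = Regex.Match(text, @"\d{3}");
        Console.WriteLine($"{first.Success} {first.Value} {first.Index}"); // True 101 4

        // Matches returns every occurrence as a MatchCollection.
        MatchCollection all = Regex.Matches(text, @"\d{3}");
        Console.WriteLine(all.Count); // 3
    }
}
```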
This 5-page feature comparison of EF Core and EF6 is probably the best TLDR: MSFT Learn: Compare EF Core and EF6. It really pushes the idea that EF Core is the way to go with new projects. That's fine, but *which* EF Core? Turns out there are versions of EF Core that are not supported outside of .NET Core, .NET Framework, and .NET Standard 2.0. That's also fine, but it forces designers and developers of existing products that use Entity Framework to move to EF Core (and cross their fingers) or stay with Entity Framework, which is very stable and reliable at this point. What if EF falls out of support completely, and EF Core doesn't support the features your application (or system) relies on?
Tough questions there. Thankfully, I am not going to worry (much) about using either one, outside of the immediate compatibility and feature requirements my current project needs.
Another sticky point is that MSFT touts EF and EF Core as having support for many database interfaces. While that is true, there are multiple caveats and tradeoffs to consider. One example is Sqlite: it is supported, and there are EF/EF Core extensions that provide for integrating Sqlite, but Sqlite itself is less focused on being EF/EF Core compatible (and frankly, Windows-ready, it seems). While Sqlite is certainly in use and a good solution for many software shops on Windows, I'm choosing not to use it for this project to avoid headaches with platform and framework compatibility and interoperability.
So, I'm going to settle on EF Core and "In Memory Database" as a simple alternative to relying on only collections, or using EF/EF Core with SQL Server or Sqlite. More likely, I'll look to building a Dapper ORM data layer, as is described by Tim Corey in his YouTube video Data Access with Dapper and SQL - Minimal API Project Part 1 where he is using ASP.NET Core Web API in .NET 6.
I picked up where I left off with an exploratory project back in November 2023. At least 2 unittests were not working properly, and one of those was failing outright. At the time I had not worked out why the failing test was having the problem. Today I was able to sort it out:
- The custom collection inherits from `ObservableCollection<T>`, avoiding lots of boilerplate event handlers and `OnChanged()` coding.
- The class had an `IList<T>` field to act as the collection. This is not necessary, as the inherited `ObservableCollection<T>` manages an internal list, so the field was removed and implementing `IList<T>` on the class was no longer necessary.
- An indexer was replaced with a `GetByName()` method.

After removing the shadowing List, validating the wrapper code functions, and replacing the indexer with a proper Get function, the Collection behaves as expected and the unit tests are now passing.
This is great because the code will get folded-in to a larger exploration that will get folded-in to the BF-BMX project (if it all works out).
```csharp
// One way to find a simple List item by name while inheriting from ObservableCollection<T>
public class MyCollection : ObservableCollection<MyClass>
{
    public MyClass GetItemByName(string name)
    {
        foreach (var thing in this)
        {
            if (thing.Name.Equals(name))
            {
                return thing;
            }
        }

        // A caller only knows about an item that exists in this
        // collection, so an error here indicates a problem elsewhere
        // in the application logic that would need to be dealt with.
        throw new KeyNotFoundException($"{name} not found in collection.");
    }
}
```
Implementing ListView with a Template in an MVVM environment is similar to what is described below, except for where in the component tree the data becomes available, and how bindings must be changed to accommodate that change:

- `BindableProperty` properties are configured in the Template code-behind like before.
- Set `x:TypeArguments` to the actual ViewModel class that contains the ObservableCollection. A `Class` namespace also points to the ViewModel. Another namespace points to the Templates directory, and in a local `ResourceDictionary`, an `x:Key` reference points to the Template file based on the template reference set in the xmlns declaration. See the example XAML below.
- The bound template content is nested within a `<ListView.ItemTemplate>` element.

```xml
<!-- ForecastView.xaml code for MVVM environment, utilizing a ListView with a View Template -->
<?xml version="1.0" encoding="utf-8" ?>
<views:BaseView ...
    x:Class="MyProject.Views.MyView"
    xmlns:views="clr-namespace:MyProject.Views"
    xmlns:vm="clr-namespace:MyProject.ViewModels"
    x:TypeArguments="vm:MyViewModel"
    xmlns:controls="clr-namespace:MyProject.Templates">
  <views:BaseView.Resources>
    <ResourceDictionary>
      <controls:CustomCard x:Key="controls:CustomCard" />
    </ResourceDictionary>
  </views:BaseView.Resources>
  <ListView ItemsSource="{Binding MyCollection}">
    <ListView.ItemTemplate>
      <DataTemplate>
        <ViewCell>
          <controls:CustomCard Name="{Binding Name}"
                               Description="{Binding Description}" >
          </controls:CustomCard>
        </ViewCell>
      </DataTemplate>
    </ListView.ItemTemplate>
  </ListView>
</views:BaseView>
```
Note: In my MVVM project, View and ViewModel inherit from abstract partial classes prefixed "Base". The BaseViewModel inherits from ObservableObject, and the BaseView partial class consumes a ViewModel type in the CTOR, and sets the BindingContext
to the ViewModel parameter. This reduces duplicated code in every ViewModel class that is created, but makes it more difficult to realize a BindingContext
does exist in each View.
The next step is styling the ListView items. Because the Bindings are now configured, theoretically all that is needed is to add BindableProperties
for each Style element and then a binding reference to Resources\Styles
. First attempt to configure this showed that the default binding is to the Model class (where the data comes from), so there is more investigation needed to solve this part.
I've been trying to understand how to leverage composition (loosely speaking) in .NET MAUI 8 to display a list of object instances within a scrollable page. In other frameworks I've been able to get this to do the work for me, including:
The high-level problem is the same, and the solution includes composing bits of UI and data to get an iterated output, which improves code reuse and limits boilerplate boringness.
Here are the high-level steps to get ListView to display properly in a Content Page view:

- Create a data model with `get` accessors.
- Create a View Template (a `ContentView`, not `ContentPage`) that contains a Frame that binds the data model properties to Labels and other standard controls, common to each data model instance's properties. Store this template in a separate folder such as "ViewTemplates".
- In the template code-behind (the `ContentView` class), create public, static, readonly `BindableProperty` properties, one for each data model property. Avoid naming conflicts.
- Create `PageView.xaml` and ensure it has `<ContentPage.Resources>` referencing the View Template (in this case "CardView") that will actually display the data, and also defines an `x:Class` that points to itself (I assume this is to ensure a reference to the collection and binding context that will be set in the next 2 steps).
- In the page code-behind, expose the collection through a `get` accessor.
- Set the page's `BindingContext` to `this`.

Code samples follow:
```csharp
// DATA MODEL with get accessors
public class Language
{
    private string _title = string.Empty;
    public string Title
    {
        get { return _title; }
        set { _title = value; }
    }

    // ...more properties...

    // add customized colors or other styles if you really want to:
    private string _cardColor = "Azure";
    public string CardColor
    {
        get { return _cardColor; }
        set { _cardColor = value; }
    }
}
```
```xml
<!-- The "View Template" named "CardView" in this project -->
<?xml version="1.0" encoding="utf-8" ?>
<ContentView ...
    x:Class="MyProject.ViewTemplates.CardView"
    x:Name="this">
  <Frame BackgroundColor="{Binding CardColor}"
         BorderColor="{Binding BorderColor}">
    <!-- rows 0 through 3 are used, so four row definitions are needed -->
    <Grid RowDefinitions="Auto,Auto,Auto,Auto"
          ColumnDefinitions="*">
      <Frame BorderColor="{Binding BorderColor}"
             Grid.Row="0">
        <Label Text="{Binding Title}"/>
      </Frame>
      <Label Text="{Binding Name}"
             Grid.Row="1"/>
      <BoxView BackgroundColor="{Binding BorderColor}"
               Grid.Row="2"/>
      <Label Text="{Binding Description}"
             Grid.Row="3"/>
    </Grid>
  </Frame>
</ContentView>
```
```csharp
// View Template Code-Behind
public partial class CardView : ContentView
{
    public static readonly BindableProperty TitleProperty =
        BindableProperty.Create(nameof(Title),
            typeof(string),
            typeof(CardView),
            string.Empty);

    public string Title
    {
        get => (string)GetValue(CardView.TitleProperty);
        set => SetValue(CardView.TitleProperty, value);
    }

    // ... more BindableProperty properties here ...

    public static readonly BindableProperty CardColorProperty =
        BindableProperty.Create(nameof(CardColor),
            typeof(string),
            typeof(CardView),
            string.Empty);

    public string CardColor
    {
        get => (string)GetValue(CardView.CardColorProperty);
        set => SetValue(CardView.CardColorProperty, value);
    }

    // CTOR
    public CardView()
    {
        InitializeComponent();
    }
}
```
```xml
<!-- Content Page "PageView.xaml" -->
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage ...
    x:Class="MyProject.Views.MyContentPage"
    xmlns:controls="clr-namespace:MyProject.ViewTemplates"
    xmlns:views="clr-namespace:MyProject.Views"
    Title="MyContentPage">
  <ContentPage.Resources>
    <controls:CardView x:Key="controls:CardView" />
  </ContentPage.Resources>
  <ListView ItemsSource="{Binding Languages}">
    <ListView.ItemTemplate>
      <DataTemplate>
        <ViewCell>
          <controls:CardView />
        </ViewCell>
      </DataTemplate>
    </ListView.ItemTemplate>
  </ListView>
</ContentPage>
```
// "View Template" code-behind
private ObservableCollection<Language> _languages = [];
public ObservableCollection<Language> Languages
{
get { return _languages; }
set { _languages = value; }
}
public MyContentPage()
{
InitializeComponent();
// this could be a REST/JSON result object or database query result, etc
// so long as it is an ObservableCollection<T>
Languages = new ObservableCollection<Language>(
[
new Language { Name = "C#", Title = "C Sharp", Description = "The primary programming language that is used to develop apps for the Microsoft platform." },
new Language { Name = "F#", Title = "F Sharp", Description = "A declarative, functional-first, object-oriented language for .NET apps." },
// more entries...
]);
this.BindingContext = this;
}
Some key ListView takeaways:

- Each item's `BindingContext` is set to the corresponding item in the data source, therefore only the properties of the item need specific bindings.

It seems that ListView is less desirable than CollectionView. Performance and customizability were cited in the MAUI documentation as the reasons. I've moved the Forecast page of the Weather app over to CollectionView and it works great in Windows and in Android debug builds. Release builds are a problem though: the data did not show without jumping through a few hoops:
- Moved the styles into `Styles.xaml`, so they would be considered in the merged resources algorithm, and Style IDs could be found.

Now the Forecast page shows data in Android Release builds, including on a physical device!
Some references:
Note: The display problem was the same in my environment, but I believe the cause was different: In my case, the compiler was probably expecting Styles.xaml to exist alongside the Template xaml, or in the View xaml.
There was a period where the Android Release version of MobWxApp wouldn't display the 7-day forecast data, and it wasn't apparent what the cause was. I had also assumed that a Release build and a Debug build would be similar enough that testing in Debug mode would suffice. I was wrong, and here is what was going on:
- The layout used `Label` elements with the `Label.FormattedText` attached property, formatting the string text and bound string data within `Span` elements nested in a `FormattedString` element.
- Removing the `Style` attributes that were bound to Styles.xaml seemed to clear up the problem: both Debug and Release builds no longer had the errors and the Forecasts view would work again.

So, what is the problem here?

- Are `Span` elements indeed not supported within `Label` elements?
- Is it `Styles.xaml`?
- Is it `CollectionView`?

Debug mode compiles differently than Release mode (obvious, right?). A Release build doesn't provide all the feedback that Debug mode does, most notably breakpoints and debug log output. Therefore, when developing XAML layouts, content handling, and style application, use a Debug build for quick testing, then before moving on, do the following:
The solution to the problem of styling `Span` elements within a parent `Label` is to:

- Style the `Span` itself, whether through an in-line Style or through a Binding.
- Ensure only `Span`-supported properties are applied (and not Label properties).

So for example, instead of:
<!-- Template View, within a Layout, with ResourceDictionary pointing to Styles.xaml -->
<Label LineBreakMode="NoWrap" Style="{Binding LabelStyle}">
<Label.FormattedText>
<FormattedString>
<Span Text="Hello "
Style="{StaticResource LabelStyle}" />
<Span Text="{Binding World}"
Style="{StaticResource LabelStyle}"
/>
</FormattedString>
</Label.FormattedText>
</Label>
<!-- Styles.xaml showing only the SPAN and LABEL element Style definitions -->
<Style TargetType="Label" x:Key="LabelStyle">
<Setter Property="VisualStateManager.VisualStateGroups">
<!-- defined visual state groups that SPAN does not support -->
</Setter>
</Style>
...add Span-specific styling and avoid relying on Label styling, like this:
<!-- Template View, within a Layout, with ResourceDictionary pointing to Styles.xaml -->
<Label LineBreakMode="NoWrap">
<Label.FormattedText>
<FormattedString>
<Span Text="Hello "
Style="{StaticResource SpanForecastItem}" />
<Span Text="{Binding World}"
Style="{StaticResource SpanForecastItem}"
/>
</FormattedString>
</Label.FormattedText>
</Label>
<!-- Styles.xaml showing only the SPAN and LABEL element Style definitions -->
<Style TargetType="Label">
<Setter Property="VisualStateManager.VisualStateGroups">
<!-- defined visual state groups that SPAN does not support -->
</Setter>
</Style>
<Style TargetType="Span" x:Key="SpanForecastItem">
<Setter Property="FontSize" Value="14" />
<!-- more SPAN specific setters here -->
</Style>
`Span` and `Label` elements do not share styling properties, despite there being some overlap, so explicit Span-targeted styles are required. A Debug build will ignore the error, but a Release build and an actual Android platform deployment might not.
That completes the Forecast page style fix-up for the app. Next steps include:
Watched a MSFT Reactor presentation today on continuous integration (CI) with LLMs and AI models. There were two guests with the host, and one of them mentioned Vector Databases and briefly described them.
Here are my notes about vector databases and MSFT's Semantic Kernel.
Also see About Machine Learning for somewhat related notes from a previous MSFT Reactor session.
- `git init`: Start up a new git-tracked folder. Some framework initializers do this for you, others do not.
- `git status`: Posh-git has the ability to show status in-line with the prompt, but it lacks details like which files have been added, modified, or deleted.
- `git log`: Answers the question "where am I in the commit history?" by reviewing (in reverse-time order) commits with their comments. Good git comments will help tell a story about the state of the code.
- `git diff (unknown)`: Shows the added or removed code difference between the previous commit and the currently saved (and uncommitted) changes. Answers the question "why is that file in the modified list?"
- `git commit (unknown) {comment}`: Can also use `.` in place of `(unknown)` to include all new/modified/deleted files in staged or unstaged states. Comment size is not strictly limited, but I stick with fewer than 50 characters so that the commit message is not cut off in a code history window like GitHub's 'code' view. Use a line feed (LF) or carriage return (CR) and LF to add comments beyond a 50-character 'title'.
- `git push {target} {branch}`: Pushes commits to a remote, or another branch. This is really a `git merge` command with a remote branch as the target, rather than the current (and local) branch.
- `git pull {target} {branch}`: Takes commits from the named target's named branch (if it exists) and attempts to 'git merge' them into the current branch. This is the opposite of `git push` and is also based on `git merge`, so use it accordingly. On occasion it can be helpful to merge from one branch to another, for example to incorporate a remote development branch into a local development branch.

Other Git commands I rarely use:
- `git stage (unknown)`: De-staging files is simpler and less risky than undoing a commit. Use stage to prepare to commit, then execute the project or solution and verify it works (tests pass, etc.) before committing the staged files.
- `git rebase {branchname}`: Every branch has a 'base commit' that is the beginning of that branch's history. Aside from the 'main' branch, every other branch will have a 'base commit' that most likely is not the first commit that initialized the git repository. When working on a team, it can be helpful to rebase your own branch onto the latest commit of a working or main branch (or some other dev branch). This ensures the current branch contains existing committed and approved changes, which can help reduce pull-request merge conflicts. One downside is that it alters the history of the current branch, so it appears to derive from some later commit than it did originally. Some teams/organizations do not allow rebasing.
- `git merge {otherbranch}`: Takes commits from 'otherbranch' and attempts to merge them into the current branch. There are options that affect how the merge is performed (fast-forward, squash, rebase, etc.) that should be reviewed before using. This is useful when working in a local development branch and changes from another development branch are needed in the local branch. It can get confusing very fast when merging branches like this, and the git history (see 'git log') can also be harder to follow. Merging to a remote branch is also possible; in fact, `git push {target} {branchname}` is a merge operation for all intents and purposes.
- `git merge {namedcommit}`: Same as `git merge {otherbranch}` but specifies a commit name rather than a branch-name label. Be sure to review the git-merge help files for additional information.

Note: The posh-git repository is somewhat stale (2 years since the last update/fix/response). This could mean it falls out of compliance with newer PowerShell releases (currently I'm using 7.4.1).
Also note: After installing git (I usually select GIT-SCM latest), access the help files in the installation directory `./share/doc/git-doc/`, or by typing `git help` for an overview of commands, and `git help {topic}` for a rich (HTML) manual.
I completed sorting out the issues with navigation in the mobile weather app. Also, the NWS managed to fix their 'Points Forecast' endpoint, but it has not been reliable, so occasionally there are REST result codes 404 and 5xx that my app will need to be better at handling.
There is more work to do, but the build is functional, publishing an APK works, and the app runs on Windows and Android (both emulated and side-loaded) without errors now.
The current version with navigation bug fixes is merged into main now and an updated side-loadable APK has been published privately.
As I have worked through using the NWS public API over the last two months, I've been learning how to better deal with user inputs, and less-expected (or unexpected) API responses.
A few takeaways:
Working through implementing a usable About page for MobWxApp:

- The .NET MAUI documentation example builds the hyperlink text with `<span>`s.
- There is a problem with `<span>` elements, and it can be worked around by using `<Label>` instead. Below is an example of the problem XAML and the work-around XAML.
- The problem code uses the `Launcher` class, instead of an IBrowser implementation (see C# code, below).
class, instead of an IBrowser implementation (see C# code, below).<!-- from .NET MAUI 8 documentation at learn.microsoft.com -->
<Label>
<Label.FormattedText>
<FormattedString>
<Span Text="Alternatively, click " />
<Span Text="here"
TextColor="Blue"
TextDecorations="Underline">
<Span.GestureRecognizers>
<TapGestureRecognizer Command="{Binding TapCommand}"
CommandParameter="https://learn.microsoft.com/dotnet/maui/" />
</Span.GestureRecognizers>
</Span>
<Span Text=" to view .NET MAUI documentation." />
</FormattedString>
</Label.FormattedText>
</Label>
<!-- Avoid using SPAN elements -->
<Label Text=".NET MAUI Project Documentation"
TextColor="DarkBlue"
TextDecorations="Underline"
VerticalOptions="Center"
>
<Label.GestureRecognizers>
<TapGestureRecognizer Command="{Binding TapCommand}"
CommandParameter="https://learn.microsoft.com/dotnet/maui/" />
</Label.GestureRecognizers>
</Label>
The problematic C# code uses `Launcher.OpenAsync(uri)` to navigate to a page:
using System.Windows.Input;
public partial class MainPage : ContentPage
{
// Launcher.OpenAsync is provided by Essentials.
public ICommand TapCommand => new Command<string>(async (url) => await Launcher.OpenAsync(url));
public MainPage()
{
InitializeComponent();
BindingContext = this;
}
}
...what is really necessary for an external hyperlink is a Browser method to call the uri using the 'System Preferred' web browser:
// note: this could be done using ICommand but my implementation uses
// the MVVM CommunityToolkit so I went with IAsyncRelayCommand instead
public partial class MainPage : ContentPage
{
public IAsyncRelayCommand<string> TapCommand =>
new AsyncRelayCommand<string>(
async (url) => await BrowserOpen(url)
);
}
...
private async Task BrowserOpen(string url) {
// check for null/whitespace string and open a try-catch block, then:
try
{
Uri uri = new Uri(url);
bool result = await Browser.Default.OpenAsync(uri, BrowserLaunchMode.SystemPreferred);
}
catch (Exception ex)
{
// handle, notify, etc
}
}
Theming In Particular:

- Use `AppThemeBinding`s everywhere once it is used somewhere on a View, otherwise there can be some unexpected results.
- Use the `transparent` color type when necessary to allow the Theme applied to a parent View/Control to show through.

Link-Like Label Styling:
The code I implemented for launching the browser and displaying a "link"-like Label is functional on Windows and Android (emulator API 32+).
Note: The `IBrowser.OpenAsync()` documentation does not mention any Exception type that might get thrown.
Custom Images and Icons:
Miro is really helpful for creating materials for images and icons. Some things to keep in mind when creating materials for .NET MAUI 8:

- `Shell.FlyoutIcon` doesn't seem to accept a `FlyoutItem` as an acceptable input.
- When images are placed in `Project\Resources\Images\`, they will automatically be assigned the Build property `MauiImage`.
- `<ItemGroup>` entries can end up with both `include` and `remove` attributes. Clean up the entries with the `remove` attribute before the next build-deploy cycle to avoid some possible deployment errors.

So many times I've done this, and yet the process is just un-obvious enough that I stumble through it pretty much every time. The goal here is to document it so that I no longer need to look it up.
1. `Rebuild` on the Solution.
2. Select `Release` in the Solution Configuration.
3. `Publish` on the Project to deploy. If there is already a Publish Configuration, a build cycle will execute, otherwise the configuration must be set first.
4. Find `Distribute...` and click it to open the 'Distribute - Select Channel' window.
5. Select `Ad Hoc`.
6. `Save As` to save the APK. Note: If there is already an APK in that folder, be certain to overwrite it, otherwise the new deployment will not complete successfully.
7. Confirm `Overwrite file?` and then enter the secure password.
8. Click `Open Distribution` at the bottom of the Archive Manager window to gain access to an APK file that can be side-loaded onto an appropriate Android API Level phone.

Note: Select `Open Folder` to see the signed-apks folder, archive.xml, and deployable APK file.
Areas where I've been struggling with JavaScript recently: Arrow Functions!
// Functional "class"
const MyThing = function() {
this.kvpStore = {};
this.has = (key) => {
return this.kvpStore.hasOwnProperty(key);
};
}
I need to sort this out in my head so it is less frustrating next time:

- Why use `this.has = (param) => {}` here, instead of `this.has = function(param) {}`? The problem is that arrow functions don't have their own binding of `this`, resulting in unexpected results when they are used as methods. So method definitions should not use this syntax. Normal methods should be written using class 'method' syntax (see below). [MDN JavaScript Reference]
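To make that `this` behavior concrete, here is a small sketch of my own (the names `store`, `MyStore`, etc. are illustrative, not from MobWxApp or MDN). An arrow function used as an object-literal method captures the surrounding scope's `this`, not the object; inside a constructor function, though, it captures the instance, which is why the `MyThing` code above works:

```javascript
const store = {
  items: { a: 1 },
  // BAD: arrow function; `this` here is the module/global `this`, not `store`
  hasArrow: (key) => {
    return this?.items?.hasOwnProperty(key) ?? false;
  },
  // GOOD: method shorthand; `this` is the receiver (`store`)
  hasMethod(key) {
    return this.items.hasOwnProperty(key);
  },
};

console.log(store.hasArrow('a'));  // false -- wrong `this`
console.log(store.hasMethod('a')); // true

// Inside a constructor function (like MyThing above), the arrow captures
// the instance `this` at construction time, so it does work:
function MyStore() {
  this.items = { a: 1 };
  this.has = (key) => this.items.hasOwnProperty(key);
}
console.log(new MyStore().has('a')); // true
```

The constructor case works only because each instance creates its own closure over `this`; as a shared method on an object literal or prototype, the arrow form silently points at the wrong thing.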
// anonymous function
(function (num) {
return num / 100;
});
// basic arrow function removes keyword 'function' and parens and braces not necessary for one-line code block and single (simple) params
num => num / 100;
// braces and 'return' keyword required for multi-line code blocks
num => {
const temp = num / 100;
return temp + 100;
};
// Returns undefined: the braces are parsed as a block body, and `foo:` as a label
const func = () => { foo: 'baz' };
// Wrap an object literal in parentheses so it is treated as an expression
const func = () => ({ foo: 'baz' });
// SyntaxError: parsed as a block body, and a function statement requires a name
const func = () => { foo: function () {...} };
// SyntaxError: method shorthand is not valid inside a block body
const func = () => { foo() {...} };
- There is no `arguments` binding in arrow functions.
- Arrow functions lack a `prototype` property, and will throw an error when called with the `new` keyword.

Note: The above examples are slightly modified versions from [MDN JavaScript Reference], accessed 5-Jan-24.
// class method syntax example with public function definitions
const obj = {
foo() {
return 'bar';
},
};
// the slightly longer form of the above:
const obj = {
foo: function () {
return 'bar';
},
};
JavaScript Delete Operator
This is an odd one! The `delete` operator allows removing a property from an Object. Identify the object and the property to perform the removal.
var HashTable = function() {
this.collection = {}; // a key-value pair storage i.e. [hashcode, value]
this.remove = (key) => {
delete this.collection[key];
}
// add, has, and other functions...
}
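A usage sketch of the `delete` semantics above (a minimal `HashTable` is re-declared here so the snippet stands alone; the notes on return value and inherited properties are my additions, not from the original post):

```javascript
const HashTable = function () {
  this.collection = {}; // key-value pair storage
  this.remove = (key) => { delete this.collection[key]; };
};

const table = new HashTable();
table.collection['abc'] = 'some value';

console.log('abc' in table.collection); // true
table.remove('abc');
console.log('abc' in table.collection); // false

// `delete` returns true for deletable properties -- and even for
// properties that never existed at all:
console.log(delete table.collection['never-existed']); // true

// It only removes *own* properties; inherited ones are untouched:
const child = Object.create({ inherited: 1 });
delete child.inherited;
console.log(child.inherited); // 1 (still visible via the prototype)
```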
Return to Root README