LeetCode: Add Two Numbers and Using Static Blocks to Help Print Recursive Solutions

I ran into a problem doing a LeetCode challenge in Java the other day. I could see that the problem was best solved with recursion, but I wasn’t exactly sure how to print the solution while the recursive method was executing. I remembered that a static block in a Java class executes once, when the class is first loaded by the JVM. Because LeetCode wraps each problem in a Solution class, I simply put code inside a static block to initialize the first part of the solution’s output, then continued the output with every step of the recursion. Below is the code I used:
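In outline, the trick looks something like this (a minimal sketch rather than the exact submission; ListNode mirrors LeetCode’s template, and each digit is printed as the recursion computes it):

```java
public class Solution {
    // Minimal singly linked list node, as in the LeetCode problem template
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    // Static block: runs once, when the JVM first loads this class,
    // so the output line is started before any recursive call executes.
    static {
        System.out.print("Digits produced (least significant first): ");
    }

    public ListNode addTwoNumbers(ListNode l1, ListNode l2) {
        return add(l1, l2, 0);
    }

    private ListNode add(ListNode l1, ListNode l2, int carry) {
        // Base case: both lists exhausted and nothing carried over.
        if (l1 == null && l2 == null && carry == 0) {
            System.out.println(); // terminate the output line
            return null;
        }
        int sum = carry
                + (l1 == null ? 0 : l1.val)
                + (l2 == null ? 0 : l2.val);
        ListNode node = new ListNode(sum % 10);
        System.out.print(node.val + " "); // print with every recursive step
        node.next = add(l1 == null ? null : l1.next,
                        l2 == null ? null : l2.next,
                        sum / 10);
        return node;
    }

    public static void main(String[] args) {
        // 342 + 465 = 807, digits stored least significant first
        ListNode a = new ListNode(2); a.next = new ListNode(4); a.next.next = new ListNode(3);
        ListNode b = new ListNode(5); b.next = new ListNode(6); b.next.next = new ListNode(4);
        new Solution().addTwoNumbers(a, b); // prints 7 0 8 after the prefix
    }
}
```

Because main lives inside Solution, the static block fires as soon as the class initializes, so the prefix appears before the first recursive step just as described above.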

This quick-and-easy trick is great for printing out the results of recursive methods written in Java as they are executing in LeetCode. Instead of returning a result from a recursive method and using that to print something to the console, we use a static block to begin the process of printing something to the console before the first recursive step even begins to execute, and we continue to print with every iteration of the recursion, finally terminating the output with the base case of the recursive method.

CSCI 462: Meeting Charleston

For this assignment, I attended the alumni symposium, where former students of the Department of Computer Science speak about their experiences getting a job and working in the field. Everyone who came seemed to be glad that they were there, and I enjoyed hearing about all their experiences. Most importantly, everyone did a great job of putting my mind at ease! Everyone’s experience was different, but all of them wanted to express the idea that no company expects you to be contributing something super valuable on the first day. They also described their experiences going through training, with Megan in particular noting the importance of pair programming in getting her up to speed on her job. Knowing that this kind of assistance is out there, as I said, puts my mind at ease.

Eventually, the discussion turned to whether the computer science curriculum should be geared more toward practical skills as opposed to theoretical knowledge. Not surprisingly, everyone said that they wouldn’t change a thing about the curriculum at the College of Charleston since the Department Chair himself was in the Zoom call! Still, I found it interesting that each of them expanded on this and said that the curriculum prepared them well. While some did concede that a few more classes would be beneficial, none of them said they would change anything significant about the degree. Personally, I believe that understanding theory helps you solve particular kinds of problems better: if you can think through the theory, you can probably think through a problem where that theory applies. Likewise, learning theory helps you grasp the true value of some of the practical tools you will use out in the field, which lets you engage with those tools more effectively and better understand what they can and can’t do.

Overall, I’m glad that I attended the alumni symposium. I received some great information about starting one’s career in computer science, had my mind put at ease, and got to catch up with some people I knew from class.

CSCI 462: Reflections on Chapter 9

This chapter of Allen Tucker’s Client-Centered Software Development got me thinking about a lot of aspects of software deployment that I’d never really considered before. The first thing I came across that I hadn’t thought about (also the first topic of the chapter) was the dynamic of negotiating with the client and the hosting service over the best means of integrating the software into the client’s website or web service. One thing I’m still not sure of is how, under the “hands-off” approach in which the code base lives physically on the server, organizations such as the NPFI actually update the code base themselves rather than relying on the hosting service. Perhaps this is done through a series of requests to the software running on the server?

Another element of deployment I hadn’t thought about before is providing support for different distributions of the software, assuming it was designed with this purpose in mind. I can see how managing support for multiple distributions could cause a headache, which explains why entities such as the NPFI are so widely relied upon. Bugs in these distributions may pop up all the time or only intermittently, but either way, fixing them effectively requires someone dedicating a decent amount of time to the software. Say, for example, I don’t touch a piece of software for months, and someone comes to me with a bug in a new distribution that I now have to understand and fix before the client can use the software. My task is made far more difficult by the fact that I have probably forgotten many of the finer details of how the software was supposed to work in the first place (i.e., in the distribution I worked on or was most familiar with). That’s why building a community and providing incentives to work on a CO-FOSS project are so important, and in practice, I’m sure that few CO-FOSS projects rely on just one or a handful of developers to fix all the bugs that appear down the road. We have now all had experience with the management and oversight needed to integrate community-made changes into a piece of software, and this chapter has given some good insight into why those practices can be so valuable.

CSCI 462: Reflections on Chapter 6

Chapter 6 of Allen Tucker’s Client-Centered Software Development discusses the importance and usage of databases in software development. He begins with a brief explanation of the different kinds of databases and then introduces the reader to SQL and its different implementations. The rest of the chapter is essentially spent explaining the functionality and syntax of SQL. This, of course, was good for me because, admittedly, I haven’t worked with SQL since last summer, when I was building my web-scraping application for professional esports players’ game stats. One thing that was brand new to me was how concurrency controls are managed by different DBMSs. I was unaware of row- and table-level locking, although something tells me we may have touched on this in databases and I just forgot. I definitely hadn’t heard of MVCC, though, where the database, in order to serve multiple queries at once, gives each transaction access to a “snapshot” of the data from a point in time. That way, reads within a transaction are not influenced by other transactions’ concurrent modifications.
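For instance, in PostgreSQL (one MVCC-based DBMS), explicit row-level locking and a snapshot read look roughly like this (a sketch against a hypothetical accounts table, not an example from the book):

```sql
-- Pessimistic concurrency: explicitly lock the selected row so that
-- other writing transactions block until this one commits or rolls back.
BEGIN;
SELECT * FROM accounts WHERE id = 42 FOR UPDATE;  -- row-level lock
UPDATE accounts SET balance = balance - 100 WHERE id = 42;
COMMIT;

-- MVCC: a REPEATABLE READ transaction reads from a snapshot taken at
-- its first query, so concurrent writers never block these reads and
-- the results stay consistent for the life of the transaction.
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT SUM(balance) FROM accounts;  -- sees the snapshot, not later writes
COMMIT;
```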

For our contributions to Open Library, we haven’t worked with the backend or databases, but the JavaScript we’ve worked with does use data retrieved from them. A lot of my time on my most recent commit was spent figuring out (and receiving help with) how to get data from the database into the JavaScript I was moving out of HTML pages. Previously, the inline JavaScript was sent from the server to the user with the server’s data already inserted into the logic of the program. After my changes, the HTML is sent to the user with the needed server data embedded in it, and the JavaScript, delivered to the user in a separate file, uses jQuery to parse the DOM for special attributes holding the server data the frontend needs.
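The pattern looks roughly like this (hypothetical element and attribute names; Open Library’s actual templates and attributes differ):

```html
<!-- Template: the server renders its data into data-* attributes
     instead of into an inline <script> block. -->
<div class="read-panel" data-book-id="OL123M" data-availability="borrowable"></div>

<!-- Served as a separate static file, e.g. read-panel.js: the script
     finds the element and reads the server data back out of the DOM. -->
<script>
  var $panel = $('.read-panel');
  var bookId = $panel.data('book-id');            // "OL123M"
  var availability = $panel.data('availability'); // "borrowable"
  // ...frontend logic that previously lived inline in the template...
</script>
```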

CSCI 462: Reflections on Chapter 5

In chapter 5 of Client-Centered Software Development, Allen Tucker discusses how to develop domain classes (either from scratch or from an earlier, similar CO-FOSS project), how to write unit tests to ensure these classes function correctly, how to use testing frameworks to easily and efficiently execute these unit tests (along with use-case tests and perhaps even UI tests), and finally, how to debug the code so that these tests pass and the software fully functions. Tucker’s discussion of domain classes includes concepts such as inheritance, code reuse, and easily understandable abstractions that help programmers translate requirements into actual code. We have touched on all of these concepts in other classes, particularly the software engineering course most seniors take the semester before the capstone. However, it is nice to see these concepts described in the context of CO-FOSS, because it lets us see how the work we have each done on our own projects has included at least one, if not all, of them, and how vital they truly are to the software development process. Likewise, Tucker describes the importance of test-driven development, or TDD, and how it is a process carried out by many programmers at once, who at times work alone but eventually have to combine their efforts to perform what is called integration testing. Naturally, given that everyone in our class is working with a code base that is already well developed, much of the testing we have done for our changes has likely involved integration testing. Finally, Tucker briefly describes debugging and refactoring, two processes vital to the continued success of a CO-FOSS project.
Good debugging tools and skills help developers contribute to CO-FOSS projects more efficiently, and likewise, refactoring code can help both new developers and project veterans better understand the code they work with (everyone has written code and forgotten how it works after not touching it for a while), in turn aiding developers in their quest to add new features without introducing bugs.

In fact, most of the work our group has done on Open Library has involved refactoring. While we are still looking for an opportunity to add a new feature to the website, there is much to be done in the way of refactoring, as the maintainers of Open Library have had many issues with their inline JavaScript introducing hard-to-find, hard-to-fix bugs when they change their web templates. Reading this chapter has helped validate our contributions to the project so far. The importance of good refactoring cannot be overstated, and I hope I have not introduced what Tucker calls “smelly code” with my own changes to the structure of the code, moving JavaScript out of HTML files and into separate JavaScript files. Regardless, as I have learned, refactoring can sometimes be hard, especially when you’re new to the language being used. Much of the unit testing in Open Library deals with the backend, so the changes we made to the JavaScript had to be tested by exercising the UI ourselves on a locally hosted build of the website. As Tucker mentions, testing the UI can sometimes be tricky; hence, he devotes an entire chapter to it! Thankfully, browsers have debugging tools that make this process easier, and they ultimately helped me solve an issue I was stuck on for many hours. Overall, I’m glad to see, both through reading Tucker and through integrating our changes into the code base, that the work we’ve done so far is in fact routine for CO-FOSS projects and can indeed be very helpful for a project’s developers going forward.

CSCI 462: Stupid or Solid?

William Durand’s article gives an overview of how to avoid STUPID code: code with singletons, tight coupling, untestability, premature optimization, indescriptive naming, or duplication. I had actually forgotten what the singleton design pattern was before reading this, but when I looked it up again, I understood why one might wish to avoid it. Likewise, the article linked from Durand’s webpage helped me understand it better: avoiding the singleton pattern is really about avoiding tight coupling. Tight coupling is something I hadn’t thought about before, and it makes sense why a programmer would want to avoid it, as it clearly makes code harder to test and understand. One thing I didn’t expect to see, though, was premature optimization, although it made sense once I thought about all the programmers on Stack Overflow telling newbies to avoid optimizing because the compiler does most of it for you. Instead, as Durand implies, make sure the code is readable and free of duplicate sections, which seems to be the cornerstone of all the principles of SOLID code…
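To make the coupling point concrete, here is a small Java sketch (hypothetical classes, not taken from Durand’s article) of the difference:

```java
// Tightly coupled: the report reaches directly for a concrete singleton,
// so it cannot be tested without the real Database class.
class Database {
    private static final Database INSTANCE = new Database();
    static Database getInstance() { return INSTANCE; }
    String fetchTitle() { return "live data"; }
}

class ReportTight {
    String render() {
        return Database.getInstance().fetchTitle();
    }
}

// Loosely coupled: the dependency is an interface passed in through the
// constructor, so a test can substitute a fake implementation.
interface TitleSource {
    String fetchTitle();
}

class Report {
    private final TitleSource source;
    Report(TitleSource source) { this.source = source; }
    String render() { return source.fetchTitle(); }
}
```

With the second version, a unit test can hand Report a stub TitleSource (even a lambda) and verify its behavior with no database at all, which is exactly the testability the article says singletons take away.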

CSCI 462: What’s Happening?

For this post, I am going to reflect on my reading of “Open Source Data Collection in the Developing World” from Computer, volume 42 (2009). The article discusses the efficacy of Open Data Kit (ODK) as a means to use mobile devices in developing countries for data collection. Data collection, they argue, enables local officials, humanitarian organizations, and citizens to make well-informed decisions. The authors cite the inefficiencies of earlier tools, claiming that they lacked the means to acquire “essential data” from the user’s device and that developers could not easily adapt these tools to collect forms of data other than text. As such, they aimed to develop ODK to address some of these issues.

As of 2009, ODK consisted of three main components: ODK Collect, ODK Aggregate, and ODK Manage, all of which function within a system designed to allow users to configure ODK to meet specific needs. As such, ODK relies on the development paradigms of component-based software engineering as well as configurable application systems. To illustrate the efficacy of the ODK software system, the authors highlight its usage in the AMPATH program in Kenya. Simply put, ODK allowed AMPATH to conduct Home-Based Counseling and Testing (HCT) for HIV with much greater ease. Not only did ODK allow doctors to carry only one device to people’s homes (an Android device), but it allowed them to skip the step of entering their collected data into Microsoft Office for upload to the OpenMRS medical system.

To my surprise, ODK is still used today. According to its GitHub page, it is currently used by the “World Health Organization, Red Cross, Carter Center, Google, and many more…” I find it interesting, though in hindsight expected, that I happened to come across this article: by chance, I found the initial point of deployment and analysis for a piece of software I had been unaware of, despite it being long-lasting and far-reaching. Likewise, the GitHub repositories for ODK’s different components appear quite active, with many contributors still working to make the software better 12 years after its initial release. ODK thus serves as a testament to the potential long-lasting efficacy of FOSS.

This Bugs Me

For our senior capstone, Clifton, Stefan, Brett and I will be working on Open Library, the free online library provided by the Internet Archive. Below are exercises from “Teaching Open Source” that begin on this page and which I will complete using Open Library’s GitHub repository.

6.4: Find the Oldest Bug

The oldest bug appears to be an issue with Open Library’s search engine (Solr) returning duplicate subject pages when certain subjects are searched. Admittedly, I don’t think I have enough experience working with the software yet to really understand what exactly is going on here. However, given that this problem hasn’t been solved, I would love to jump on it and attempt to solve the oldest known (unsolved and recorded) bug in Open Library.

6.5: Create your Bug Tracker Account

Open Library doesn’t appear to have any formal bug tracker other than GitHub’s “Issues” feature for repositories. Since I already have a GitHub account, this step of getting involved with the project is already complete.

6.6: The Anatomy of a Good Bug Report

I will reproduce a bug we are currently working on as a team. Below is a snapshot of the bug on GitHub:

[Screenshot of the GitHub issue, captured 2021-01-25]

As you can see, the cover image of poor Winnie the Pooh getting a checkup is stretched. Here is the issue reproduced on my local build, sadly very blurry:

6.7.1: Bug Triage (Triage five bugs)

I’m not quite sure how to answer this one, as I am still new to Open Library’s codebase and so can’t yet make a good effort at triaging the bugs that need it. Likewise, I don’t appear to have the authority to triage bugs. Perhaps I will come back to this one in the future…

Reflections on Open Source in Today’s World

For this assignment, we were tasked with finding two articles on opensource.com and summarizing our thoughts. The two articles I chose are “Convert your Windows install into a VM on Linux” by David Both and “Explore binaries using this full-featured Linux tool” by Gaurav Kamathe. I tried to pick two different topics so I could learn something new from each article.

Both’s article was interesting because I had never thought about how someone might need to install Windows as a VM on Linux to use a Windows-only application, in just the same way we have been asked to do the opposite for our classes. The author is unashamedly anti-Windows; it seems he wrote the article to satisfy the needs of anyone unfortunate enough to have been in his position, though he describes the process quite clearly. In four parts, one must: back up Windows to an external hard drive, reboot the computer into Linux from a live USB, configure the VM in VirtualBox (which is a little more involved than usual), and activate the Windows VM by buying a new license (if you do not already have one). I will definitely be saving this article, as I may have to do this at some point.

Touching on an entirely different topic, Kamathe’s article advocates for a binary analysis tool called Radare2, which he says packs a lot of useful tools into one place. Kamathe takes the reader through installing Radare2 and using its basic features. I was amazed at all it can do, and there’s still a lot of information in the article that I don’t understand; in other words, it appears one can do far more with it than my current understanding lets me see. My favorite feature is the string analysis Radare2 can perform on binaries, because it offers the user an easy way to spot potential points of interest for further analysis. Reading Kamathe’s article has made me want to work with this tool, although I’m not sure exactly how yet. It would be interesting to do binary analysis of gaming applications to see how the game on my machine interacts with remote servers during online play, as I’m interested in statistical analysis of performance in competitive gaming. However, I’m not entirely sure this is even possible, and it would certainly be breaking TOS.

Overall, both of these articles were very informative and helpful, and there are many more on opensource.com that I would love to delve into. The authors’ experience and willingness to tell others what they know is respectable and invaluable for an aspiring software engineer like myself.

Reflections on FOSS

Over the course of my team’s (QuadSquad, with Clifton, Brett, and Stefan) explorations and readings, a consistent theme surrounding open-source development and free software has been its ability to scale. In “The Cathedral and the Bazaar,” Eric S. Raymond revisits the assertion Fred Brooks made in The Mythical Man-Month, known as Brooks’ Law: the more developers hired or tasked to finish a project, the later that project will ultimately be, owing to the rising cost of complexity within the development cycle. Now, I think there are cases where this postulate still holds true. Whereas open-source development is centered around system design that is inherently accepting of new and even incomplete features, the world Brooks was discussing, and which Raymond touches on a bit, is one in which software developers usually vetted all the requirements of a system and created the entire design in advance of the coding phase. In some cases this kind of development is necessary (safety-critical systems, for example), and in those cases I can see how it may actually be a bad thing for too many developers to tackle a single software project. One of the things that makes open-source software so great for developers, Raymond and others (Richard Stallman, Linus Torvalds) have argued, is the freedom it provides them and users. Rather than throwing endless developers at a finite set of problems over time, let them discover the problems themselves, and some incredible solutions may appear. This is essentially the argument behind FOSS, and it can be applied more broadly to agile development as a whole, even in proprietary settings.
However, I think it is important to note that there are cases where FOSS is just not a viable option for a software project, such as a video game (because obviously one cannot have one’s game distributed freely by users) or a safety-critical system, where emergent properties of the system could cause bugs that are potentially devastating to users. Raymond argues that the more eyes you have looking at bugs, the faster those bugs will be fixed. Indeed, but surely there is a point of diminishing returns? And given that we have established (in my past software development course and here) that in some cases it is not only necessary to keep software proprietary but also to follow a serial method of development, I can see how having too many people tackle a single bug or feature might slow things down. Perhaps most of the bug-fixing benefit reaped by FOSS comes from the wide array of features available for FOSS developers to work on, which distributes the search for and repair of bugs more equally across the entire code base, rather than having too many developers attack a single problem at once.