We began this term struggling with our old idea of ConvoCode...
It was initially a voice-to-code smart IDE, with potential use cases in either accessibility or education
Ex. It could help cut down on typing time for software engineers with repetitive stress disorders like carpal tunnel syndrome
Ex. It could be a fun way for those new to coding to explore coding (for loops, if else, things you would find in CS 001 assignments) without having to worry about syntax
We knew that Serenade (a voice-to-code plug-in for VSCode) existed, but we thought we could use voice-to-code technology in a different, more interesting and creative way, using more general natural language as input prompts
We worked on building our first prototype, but by the end of fall at Technigala, even though we were happy with how it turned out in terms of functionality, we weren't getting the feedback we wanted
People thought it was cool but just didn't see the vision in terms of use case or understand where else we could take this project/how we could take it to a new level
It was not straightforward enough for beginners to code with, yet the model was far too creative (and a bit too slow) for an engineer to want to use, and there was nothing keeping users coming back
We finally realized and accepted that it was too much of a novelty idea without a practical enough use case
Coming back from Winterim, we were extremely discouraged by the clouds in our vision...
We had a few hours-long meetings to discuss EVERY possible direction we could take this project, but were still unsure which course was best
The question we were trying to answer was this: how could we base our product around, and leverage, the creativity and power of the OpenAI models to answer increasingly complex/novel questions? We realized this was the strong point of our project
After an incredibly helpful brainstorming session with Natalie, we landed on our ultimate idea
Over Winterim, ChatGPT really started to take off, and we paired the popularity of this breakthrough AI with what we noticed at Technigala: no one knew what kinds of questions they could ask the AI. We were so intrigued by what people came up with and how they came up with it, that we decided to pivot in the direction of learning to ask the right questions to AI, and how AI code generation can be used both practically and creatively.
Coming up with this idea was definitely the biggest challenge for us, but we finally had an idea that we all loved and one that used a lot of what we had before!
From then on, it was just about building out the website. As Tim said, "Get it done."
We started by adding a ton to the previously quite simple backend
Now, it wasn't just being used to make calls to the OpenAI API to get code from text prompts
Most of the new functionality came from adding users to support the community aspect of our project, so we set up a database with MongoDB (thanks Prof. Palmer and CS 61!)
We added accounts, CRUD for users, and also CRUD for what we deemed 'projects', i.e. the code from a session using our IDE
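The 'project' CRUD described above can be sketched roughly as follows. This is a hypothetical illustration, not our actual backend code: the real implementation stores documents in MongoDB, and field names like `title`, `code`, and `likes` are assumptions for the sake of the example, here backed by an in-memory Map so the shape is easy to see.

```javascript
// Hypothetical sketch of 'project' CRUD (a project = the code from one IDE
// session). The real backend persists these documents in MongoDB; an
// in-memory Map stands in for the collection here.
class ProjectStore {
  constructor() {
    this.projects = new Map();
    this.nextId = 1;
  }

  // Create: save the code from an IDE session under a fresh id
  create({ owner, title, code }) {
    const id = String(this.nextId++);
    const project = { id, owner, title, code, likes: 0 };
    this.projects.set(id, project);
    return project;
  }

  // Read: look a project up by id
  get(id) {
    return this.projects.get(id) ?? null;
  }

  // Update: merge changed fields into the stored document
  update(id, fields) {
    const existing = this.projects.get(id);
    if (!existing) return null;
    const updated = { ...existing, ...fields, id };
    this.projects.set(id, updated);
    return updated;
  }

  // Delete: remove the project entirely
  remove(id) {
    return this.projects.delete(id);
  }
}
```

The user-account CRUD followed the same pattern, just over a different document shape.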
The frontend needed a LOT of work, with several new pages added to the site for profiles and community exploration
We got to work on creating a ton of brand new designs while simultaneously setting up working versions of the site to get functionality down while designs were being finished
Something that threw a bit of a wrench in our plans was catering code generation to web development instead of Python
We had to redesign, but we quickly got back on track with an IDE with THREE editors instead of one, along with a pane that displayed their combined output
This ended up being the best thing we ever did! It allowed for so much more creativity and interesting projects for both seasoned and beginner coders (THANK YOU TIM FOR THIS SUGGESTION) and made our site look FANTASTIC once we got the iframes rendered on the explore page
Team Dynamic
Breaking up into designers, front end coders, and back end coders allowed for each member to leverage their specific skills and contribute to the product in a way that felt meaningful to them
Sometimes, this led to a bit of trouble...people would get so zoned in on their part of the project that they would forget to make sure things worked well with the bigger picture...but we were able to overcome such problems by coming together as a team to solve any issues that arose
Even though we all leaned on our strengths, we each worked on different parts of the project at some point
Something we struggled with was how to get reliable and robust responses from the OpenAI API. This required significant testing and adjustments, including updating the parameters of the model as well as adding specific context to our prompts. The creativity of AI-generated responses meant our users sometimes received irrelevant or poorly generated code.
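The idea of adding context and tuning parameters can be sketched like this. Everything here is illustrative: the context string, the temperature of 0.3, and the token cap are assumptions standing in for the settings we actually converged on, not our production values.

```javascript
// Hypothetical sketch of wrapping a user prompt with context before sending
// it to the OpenAI API. The context string and parameter values below are
// illustrative assumptions, not our production settings.
function buildRequest(userPrompt, language) {
  // Context steers the model toward code we can actually run in the IDE
  const context =
    `Respond with only valid ${language} code and no explanations. ` +
    `The code must run inside a browser sandbox with no external files.`;
  return {
    prompt: `${context}\n\n${userPrompt}`,
    temperature: 0.3, // lower temperature -> more deterministic, relevant code
    max_tokens: 512,  // cap the response length to keep the IDE responsive
  };
}
```

The returned object would then be passed along to the API call; lowering temperature and constraining the prompt were the kinds of adjustments that cut down on irrelevant output.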
We realized in the end that this was OK. It is part of the fun of exploring AI code generation: seeing the really cool and amazing things it can do while also seeing the weird things the models return sometimes! And where it is still lacking and needs improvement...
Furthermore, due to the creativity of users and the AI, during user testing we often had to handle cases we had not controlled for when building the site. For example, we had to deal with infinite loop generations (using loop protection and cleaned code), as well as images and links in the AI-generated responses that didn't exist in the scope of our IDE and therefore wouldn't render.
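The loop-protection idea can be sketched as below. This is a simplified, hypothetical version: it rewrites `while` loops with a regex so each one aborts after a fixed number of iterations, whereas a robust implementation would transform the code via an AST and cover `for`/`do` loops too.

```javascript
// Hypothetical sketch of loop protection: rewrite each `while (...) {` in the
// AI-generated code so the loop bails out after a fixed iteration budget.
// A regex is used here for brevity; a real implementation would parse the
// code into an AST (and would also handle for/do loops and nested parens).
function addLoopGuards(code, maxIterations = 10000) {
  let counter = 0;
  return code.replace(/while\s*\(([^)]*)\)\s*\{/g, (match, cond) => {
    const guard = `__loopGuard${counter++}`; // unique counter per loop
    return (
      `let ${guard} = 0; while (${cond}) { ` +
      `if (++${guard} > ${maxIterations}) break; `
    );
  });
}
```

Running guarded code inside the IDE's sandbox then terminates instead of freezing the browser tab when the model generates `while (true) { ... }`.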
We were also able to mitigate initial user frustration with the slowness of the model by tuning it better
Something that we loved: fellow CS students were really excited to see what they could do with our site, trying things such as SQL injections, algorithmic games, and external API calls.
Though it was frustrating, it was fun to see people find creative ways to break our site using AI code generation.
We found using ZenHub easy and helpful. Being able to create, visualize, and assign tasks meant the entire team was more likely to be on the same page when it came to deliverables for the week and during our Sunday debrief meetings.
Validation
getting 50 people signed up by Technigala
81 users! Achieved :)
getting at least 15 posts that are not from us
123 total projects, well over 50% from users that are not us
having at least 5 discussions posted about user-generated products
196 comments posted...ended up getting some awesome / sometimes funny conversations going
Something we want in the future is more technical discussion about the code in these comments
being strong enough to sustain a high capacity of users (maybe 100?)
yes, we implemented pagination to avoid storing all projects in Redux so we could handle a high volume
get 3 clicks on a share button for sharing ConvoCode projects to other platforms
we don't have metrics on how many times this was clicked, as we currently have no way to track it, but we hope people utilized it to share the links to their projects
Getting at least 25 users to upvote and downvote some projects that are not their own
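The pagination mentioned in the validation goals above boils down to a small calculation. The sketch below is a hypothetical illustration of that idea (the page size of 12 is an assumed default, not necessarily what the site uses): the explore page asks the backend for one page at a time instead of pulling every project into the Redux store.

```javascript
// Hypothetical sketch of the pagination math behind the explore page.
// Page numbers are 1-based; a pageSize of 12 is an illustrative default.
function paginate(totalCount, page, pageSize = 12) {
  const totalPages = Math.max(1, Math.ceil(totalCount / pageSize));
  const current = Math.min(Math.max(1, page), totalPages); // clamp into range
  return {
    skip: (current - 1) * pageSize, // documents to skip in the MongoDB query
    limit: pageSize,                // documents to return for this page
    totalPages,
    current,
  };
}
```

Only the `limit`-sized slice for the current page ends up in Redux, which is what keeps the store small even with a high volume of projects.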
Potential Next Steps
Our next goal is to work on marketing our product further and having it reach people beyond Dartmouth, focusing specifically on virality within the AI interest community
We plan on posting ConvoCode on Twitter, Reddit, and LinkedIn within the next week.
Dylan is going to make a LinkedIn post later this week to market the work and we will all repost it.
William is going to publish the Medium article
We are all going to post it in relevant Reddit threads
If we were doing ConvoCode 3.0...
We would love to add the ability to see other people's profile pages. Right now, you can search by username to see all their projects, but having the ability to see others' profile pages would make it more like a social media platform
A more complex sorting algorithm for the community page. We sort by likes now but it would be better to mix popularity with most recent
Adding some more smart IDE features like runtime analysis, error searching on StackOverflow (long live the widgets!)
Support for more languages (Python?), which would require some redesigning
Make profile page more interesting and social-media esque/aesthetic to differentiate from Community Page more
It would also be super cool to give users the ability to choose what OpenAI / model parameters they want to use themselves (or in the future support text to code models from other places)
It would be awesome to eliminate the Command History button altogether and just show the history / highlight AI-generated code in real time
Final Takeaways
OpenAI and GPT are very current subjects on the minds of most students. In this way, we loved how cutting-edge ConvoCode is and how we are engaging with the recent uptick in interest in AI and all that it can do for computer science and beyond. It made the project incredibly exciting to work on, being so relevant to our lives.
We learned the valuable lesson that something 'cool' isn't what ultimately makes an idea succeed. It's not what's going to make a team come together and feel invested in the project, nor is it going to result in something long-lasting that people will come back to.
One of our biggest regrets was not pivoting to this project sooner. Given that this was a two-term course but our first term was used on a different idea and voice capability, we feel that we lost some valuable coding time. Had we admitted defeat on our original idea sooner, we would have had time for some of our 3.0 ideas. Luckily, ConvoCode 2.0 did utilize a lot of our IDE capabilities from 1.0.
We are genuinely upset that our official time working with each other on ConvoCode is over, but had simply the best two terms together.
Some of us were friends before, but we are all friends for life now.
Having a project this term that we really believed in and were excited about made us even more invested in seeing its success through.
We would disagree with each other on implementation sometimes but only in the best spirits and to get the most out of this project
Overall, what worked really well was that each team member was willing to teach other team members how to do something instead of just saying "Oh I'll just do it myself because that will be faster"
We struck a great balance between efficiency/getting things done and learning
We found that coding in pairs, as well as an all-team meeting we called a 'coding lock-in' on Sundays, worked well. Being able to communicate and bounce ideas off of one another, as well as get immediate feedback on PRs, allowed for an efficient and productive team environment
One really exciting aspect of Technigala and general user testing was discovering a market for our product with students new to code. While we designed ConvoCode for users with at least a base level of CS knowledge, we found that new CS students had almost just as much fun asking the AI prompts, using existing projects on the community page and their prompts as starting points to mitigate the "What do I ask it..." problem for new coders
General sentiment by users was incredibly positive! We couldn't be more excited about the genuine interest that all different kinds of people (coders or not) showed in the project at Technigala and beyond. A fellow COSC 98 student on a different project telling us that ConvoCode was his favorite project made our term.
Gifs
Individual Post
Community Page
https://user-images.githubusercontent.com/66576635/224514241-f74401f3-7275-4fd1-ad07-f496e3a86f51.mov
Code Tagging
https://user-images.githubusercontent.com/66576635/224514280-0a3f4b6b-7585-402f-8fe6-e55881e4d5c2.mov
Exploring Community Page