Day 25 using adjacency dict
derailed-dash committed May 5, 2024
1 parent b1ea519 commit 9d28327
Showing 1 changed file with 112 additions and 4 deletions.
src/AoC_2023/Dazbo's_Advent_of_Code_2023.ipynb (112 additions, 4 deletions)
@@ -10446,7 +10446,6 @@
"\n",
"# SETUP LOGGING\n",
"logger.setLevel(logging.DEBUG)\n",
- "# td.setup_file_logging(logger, locations.output_dir)\n",
"\n",
"# Retrieve input and store in local file\n",
"try:\n",
@@ -10501,7 +10500,7 @@
"\n",
"We need to split this into two separate graphs. We're told that we can split our graph into two graphs by making three cuts. So _I assume that there is only one way to split our graph into two graphs, with three cuts._\n",
"\n",
- "Splitting a graph into two separate graphs is a well known problem, and can be solved with the **max-flow cut-cut theorem.** Check out some links:\n",
+ "Splitting a graph into two separate graphs is a well known problem, and can be solved with the **max-flow min-cut theorem.** Check out some links:\n",
"\n",
"- [Max-flow min-cut theorem - Wikipedia](https://en.wikipedia.org/wiki/Max-flow_min-cut_theorem)\n",
"- [Max-flow min-cut algorithm - Brilliant.org](https://brilliant.org/wiki/max-flow-min-cut-algorithm)\n",
@@ -10542,7 +10541,7 @@
" ax.set_axis_off()\n",
" plt.show()\n",
" \n",
- " left = next(iter(graph.nodes)) # pick an arbitrary node to be the source (for partitin 1)\n",
+ " left = next(iter(graph.nodes)) # pick an arbitrary node to be the source (for partition 1)\n",
" for right in graph.nodes: # iterate through remaining nodes as sinks (for partition 2)\n",
" if left != right:\n",
" cut_val, partitions = nx.minimum_cut(graph, left, right)\n",
@@ -10584,6 +10583,115 @@
"logger.info(f\"Part 1 soln={soln}\")"
]
},
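For readers unfamiliar with the `nx.minimum_cut` call used above, here is a minimal, self-contained sketch on a toy graph (the graph and node names are invented for illustration, not the notebook's puzzle input). Note that networkx flow algorithms expect a `capacity` edge attribute; setting `capacity=1` on every edge makes the cut value equal the number of edges removed.

```python
# Hedged sketch of nx.minimum_cut on an invented toy graph:
# two triangles joined by a single bridge edge c-d.
import networkx as nx

G = nx.Graph()
G.add_edges_from(
    [("a", "b"), ("b", "c"), ("a", "c"),   # left triangle
     ("d", "e"), ("e", "f"), ("d", "f"),   # right triangle
     ("c", "d")],                          # the bridge
    capacity=1,                            # unit capacities -> cut value counts edges
)

cut_value, (part1, part2) = nx.minimum_cut(G, "a", "f")
print(cut_value)                # 1 -- only the bridge needs cutting
print(len(part1) * len(part2))  # 3 * 3 = 9
```

For the puzzle, the cut value of interest is 3 (the three wires), and the answer is the product of the two partition sizes, just as in the toy example.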
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### From First Principles\n",
"\n",
"What if we want to solve this without using (or knowing about) an existing graph algorithm?\n",
"\n",
"We can use this approach instead:\n",
"\n",
"- Build an adjacency dictionary for all the nodes in the graph.\n",
"- Then use BFS outwards from each node in the graph, and count the number of times we see each specific edge.\n",
"- The three wires we need to disconnect will be the three edges we've seen most often. Why? Because every path between the two clusters must cross one of these three bridge edges, so they are traversed far more often than edges that sit inside either cluster.\n",
"- Disconnect these three edges and the graph will be split into two separate graphs.\n",
"- BFS from one subgraph to determine its size.\n",
"- Subtract this size from the original graph size to determine the size of the other subgraph.\n",
"- Finally, return the product of these two sizes."
]
},
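The intuition behind the edge-frequency heuristic can be checked with the standard library alone. The sketch below uses an invented six-node graph (two triangles joined by one bridge edge `c-d`): after BFS-ing from every node and backtracking every path, the bridge edge dominates the counts, because every cross-cluster path must use it.

```python
# Toy illustration (hypothetical graph, stdlib only) of the edge-frequency idea:
# bridge edges lie on every cross-cluster BFS path, so they rack up the
# highest counts when we BFS from every node and backtrack.
from collections import defaultdict, deque

graph = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e", "f"}, "e": {"d", "f"}, "f": {"d", "e"},
}

edge_counts = defaultdict(int)
for start in graph:
    prev, seen, queue = {}, {start}, deque([start])
    while queue:                         # standard BFS, recording predecessors
        current = queue.popleft()
        for adj in graph[current]:
            if adj not in seen:
                seen.add(adj)
                prev[adj] = current
                queue.append(adj)
    for node in graph:                   # backtrack every node's path to start
        while node != start:
            edge_counts[frozenset((node, prev[node]))] += 1
            node = prev[node]

most_common = max(edge_counts, key=edge_counts.get)
print(sorted(most_common))  # ['c', 'd'] -- the bridge edge wins
```

Each of the 6 starting nodes backtracks 3 cross-cluster paths through `c-d`, so its count is 18, comfortably ahead of any within-cluster edge.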
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def solve_first_principles(data):\n",
" graph = defaultdict(set)\n",
" \n",
" for line in data: # e.g. jqt: rhn xhk nvd\n",
" left, right = (part.strip() for part in line.split(\":\"))\n",
" right_parts = right.split()\n",
" \n",
" for r_part in right_parts:\n",
" graph[left].add(r_part)\n",
" graph[r_part].add(left)\n",
" \n",
" logger.debug(graph)\n",
" \n",
" edge_counts = defaultdict(int) # map of edges to number of times they were encountered\n",
" \n",
" for start in graph: # bfs using each node in the graph as start\n",
" queue = deque([start])\n",
" seen = set()\n",
" seen.add(start) \n",
" prev = {} # breadcrumb trail for backtracking\n",
" \n",
" while queue:\n",
" current = queue.popleft()\n",
" for adj in graph[current]:\n",
" if adj not in seen:\n",
" seen.add(adj)\n",
" queue.append(adj)\n",
" prev[adj] = current\n",
" \n",
" # Now let's backtrack to find all the edges that were traversed\n",
" # and increment their counts\n",
" for node in graph:\n",
" while node != start:\n",
" came_from = prev[node]\n",
" edge_counts[frozenset([came_from, node])] += 1 # use set so order of nodes does not matter\n",
" node = came_from\n",
" \n",
" # let's sort the edge_counts by frequency\n",
" sorted_edge_counts = sorted(edge_counts.items(), key=lambda x: x[1], reverse=True)\n",
" logger.debug(f\"{sorted_edge_counts=}\")\n",
" logger.debug(f\"Three most frequent edges:\\n{sorted_edge_counts[:3]}\")\n",
" \n",
" # Now let's remove these three edges from the graph\n",
" # This will divide our graph into two sub-graphs\n",
" for (left, right), _ in sorted_edge_counts[:3]:\n",
" graph[left].remove(right)\n",
" graph[right].remove(left)\n",
" \n",
" # Now let's find the size of one of the two sub-graphs\n",
" # We'll do this with a BFS from any node, counting every node we can reach\n",
" start = next(iter(graph))\n",
" queue = deque([start])\n",
" seen = set()\n",
" seen.add(start) \n",
" \n",
" while queue:\n",
" current = queue.popleft()\n",
" for adj in graph[current]:\n",
" if adj not in seen:\n",
" seen.add(adj)\n",
" queue.append(adj)\n",
" \n",
" sub_graph_sizes = [len(seen), len(graph) - len(seen)]\n",
" logger.debug(f\"{sub_graph_sizes=}\")\n",
" return math.prod(sub_graph_sizes)\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"logger.setLevel(logging.DEBUG)\n",
"validate(solve_first_principles(sample_input.splitlines()), sample_answer) # test with sample data\n",
"logger.info(\"Tests passed!\")\n",
"\n",
"logger.setLevel(logging.INFO)\n",
"soln = solve_first_principles(input_data)\n",
"logger.info(f\"Part 1 soln={soln}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -10740,7 +10848,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.7"
+ "version": "3.11.5"
},
"toc-autonumbering": false,
"toc-showcode": false,
