10 changes: 8 additions & 2 deletions 02_activities/assignments/DC_Cohort/Assignment2.md
@@ -54,7 +54,7 @@ The store wants to keep customer addresses. Propose two architectures for the CU
**HINT:** search type 1 vs type 2 slowly changing dimensions.

```
Your answer...
After googling what type 1 and type 2 mean (not related to diabetes), type 1 is essentially overwriting existing data and type 2 is tracking history. I imagine that to keep addresses we would use type 2: the table would hold multiple entries for the same customer, each with a different address at a different point in time (presumably the customer moved). In the type 1 case, the architecture of the table would allow each customer to have only one customer_id and one address!
```
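
A minimal sketch of the two options, assuming made-up column names for a CUSTOMER_ADDRESS design (nothing here is from the assignment's actual schema): type 1 keeps one row per customer and simply overwrites the address in place, while type 2 keeps one row per address version with validity dates so the history survives.

```
-- Type 1 (overwrite): one address per customer; an UPDATE replaces the old value
CREATE TABLE customer_address (
    customer_id   INT PRIMARY KEY,
    street        VARCHAR(100),
    city          VARCHAR(50),
    postal_code   VARCHAR(10)
);

-- Type 2 (keep history): one row per address version, with validity dates
CREATE TABLE customer_address_history (
    customer_address_id INT PRIMARY KEY,
    customer_id         INT,
    street              VARCHAR(100),
    city                VARCHAR(50),
    postal_code         VARCHAR(10),
    valid_from          DATE,
    valid_to            DATE  -- NULL marks the current address
);
```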

***
@@ -183,5 +183,11 @@ Consider, for example, concepts of labour, bias, LLM proliferation, moderating c


```
Your thoughts...
Prior to reading this article, I had a rough idea of what ImageNet was and of Dr. Fei-Fei Li, the famous researcher from China behind it. However, I did not recognize the extent to which it involved thousands of hours of tagging and intensive, cheap labour. I think there are three main ethical considerations to contemplate.

The first ethical consideration is consent and copyright. Currently, a large section of my doctoral research touches on GenAI in the context of music. One issue is that while we can train neural networks and optical character recognition (OCR) technology to optimize machine vision so that it can read sheet music, these models often do not have licenses or legal consent to use such images as part of their training data. What if the tagged images were obtained illegally or without consent? Furthermore, do we really know where these images came from? While you could argue that all of these images are "free-for-all" because they exist on the internet, scientists and researchers have to abide by more stringent regulations in this regard. One is not necessarily building a dataset for oneself, but for research and commercialization. I work for a company known as Swift Medical, where we have the largest wounds database in the world. However, the large majority of our datasets required patient consent and willingness to be imaged and have data collected in order to fine-tune neural networks/machine-vision platforms for detecting and tracking dermatological conditions.

The second ethical consideration is the conception of labour. As a former research assistant myself, I can sympathize with the $10/hour labelling work that Dr. Li's research assistants had to go through; I have suffered through such mundane work before. However, I am not necessarily a proponent of outsourcing work to Amazon Mechanical Turk in particular; I would much rather use Prolific. This is because, as the article states, the quality of work on Mechanical Turk can be considerably lower. Furthermore, since the economic incentive on Turk is much lower, the demand for a so-called "high quality" finish is less. The lack of adequate payment is something that could really skew the dataset.

Finally, a crucial ethical consideration lies in the notion of content moderation. In many ways, images--just like art, music, and other fine art forms--can be subjective. One person may label an image one way, while another may label it differently. Human labellers inevitably carry inherent biases that can be harmful, especially if those biases feed into their labelling practices. For example, associating "weak" and "nerdy" with a person wearing glasses is a harmful narrative that is only perpetuated if such a case becomes part of the dataset (depending on weighting, tuning parameters, etc.). In what ways can LLMs and other models perpetuate harmful narratives because they were fed a specific flavour of data (which was subjective to begin with)? Furthermore, is anyone really moderating this aside from the occasional pushback?
```
160 changes: 142 additions & 18 deletions 02_activities/assignments/DC_Cohort/assignment2.sql
@@ -19,7 +19,12 @@ HINT: keep the syntax the same, but edited the correct components with the strin
The `||` values concatenate the columns into strings.
Edit the appropriate columns -- you're making two edits -- and the NULL rows will be fixed.
All the other rows will remain the same.) */

-- note to self, coalesce looks for NULL automatically
SELECT
product_name ||
', '|| COALESCE (product_size,'')|| -- I am replacing product_size NULL with blank
' ('||COALESCE(product_qty_type,'unit') || ')' -- I am replacing NULL with "unit"
FROM product;


--Windowed Functions
@@ -32,17 +37,32 @@ each new market date for each customer, or select only the unique market dates p
(without purchase details) and number those visits.
HINT: One of these approaches uses ROW_NUMBER() and one uses DENSE_RANK(). */


SELECT
customer_id,
market_date,
	ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY market_date ASC) AS visit_number -- PARTITION BY restarts the numbering for each customer; ORDER BY picks which column orders the visits
FROM customer_purchases;
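
-- Hedged alternative sketch (not part of the original submission): the hint also mentions
-- DENSE_RANK(). Keeping every purchase row, DENSE_RANK() assigns the same visit number to
-- all purchases made on the same market_date, so repeat purchases on one visit are not
-- double-counted.
SELECT
	customer_id,
	market_date,
	DENSE_RANK() OVER (PARTITION BY customer_id ORDER BY market_date ASC) AS visit_number
FROM customer_purchases;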

/* 2. Reverse the numbering of the query from a part so each customer’s most recent visit is labeled 1,
then write another query that uses this one as a subquery (or temp table) and filters the results to
only the customer’s most recent visit. */


SELECT * FROM -- select all from this nested table, but with "where" only pick visit 1
(
SELECT --subquery (like a "nested" table)
customer_id,
market_date,
ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY market_date DESC) AS visit_number
FROM customer_purchases
)
WHERE visit_number = 1;

/* 3. Using a COUNT() window function, include a value along with each row of the
customer_purchases table that indicates how many different times that customer has purchased that product_id. */

SELECT DISTINCT --I kept running into errors with this code; turns out I just needed DISTINCT, or else I get duplicate entries - DISTINCT collapses them!
customer_id,
product_id,
COUNT(*) OVER (PARTITION BY product_id, customer_id) AS times_purchased
FROM customer_purchases;


-- String manipulations
@@ -56,12 +76,25 @@ Remove any trailing or leading whitespaces. Don't just use a case statement for
| Habanero Peppers - Organic | Organic |

Hint: you might need to use INSTR(product_name,'-') to find the hyphens. INSTR will help split the column. */

--SELECT
-- COALESCE(NULLIF((SUBSTR(product_name, 1, INSTR(product_name,' -')),''),'NULL') AS product_name_shortened --this one broke my head a little bit... too many nests! and errors!
--FROM product
--here it is correct with TRIM and better formatting!!!
SELECT
COALESCE(
NULLIF(TRIM(SUBSTR(product_name, 1, INSTR(product_name,' -') - 1)), ''),'NULL'
) AS product_name_shortened
FROM product;
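
-- Hedged sketch (an assumption about the truncated prompt, not the submitted answer): the
-- example row "| Habanero Peppers - Organic | Organic |" suggests the text AFTER the hyphen
-- may also be wanted, as a description column that is NULL when there is no hyphen.
SELECT
	product_name,
	CASE
		WHEN INSTR(product_name, '-') > 0
			THEN TRIM(SUBSTR(product_name, INSTR(product_name, '-') + 1))
		ELSE NULL
	END AS description
FROM product;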


/* 2. Filter the query to show any product_size value that contain a number with REGEXP. */


SELECT
product_size,
COALESCE(
NULLIF(TRIM(SUBSTR(product_name, 1, INSTR(product_name,' -') - 1)), ''),'NULL'
) AS product_name_shortened
FROM product
WHERE REGEXP_LIKE(product_size,'[0-9]'); --I used the REGEXP_LIKE function here because it's the best one I could find to look for numerical digits!
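
-- Hedged alternative (assumption: the database is SQLite without a regexp extension loaded,
-- in which case REGEXP_LIKE is unavailable). GLOB is built into SQLite and '*[0-9]*' matches
-- any product_size containing a digit.
SELECT product_size
FROM product
WHERE product_size GLOB '*[0-9]*';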

-- UNION
/* 1. Using a UNION, write a query that displays the market dates with the highest and lowest total sales.
@@ -72,10 +105,41 @@ HINT: There are possibly a few ways to do this query, but if you're struggling
"best day" and "worst day";
3) Query the second temp table twice, once for the best day, once for the worst day,
with a UNION binding them. */




--first query to get sale values grouped by dates
CREATE TEMP TABLE sales_by_date AS
SELECT
market_date,
SUM(sales) AS total_sale
FROM vendor_daily_sales
GROUP BY market_date; --need GROUP BY so we get a sum for each date, not for the whole table!

--second temp table to rank the dates and find the best and worst!
CREATE TEMP TABLE sales_by_date_ranked3 AS
SELECT
	market_date,
	total_sale,
	RANK() OVER (ORDER BY total_sale DESC) AS RANK_DESC
FROM sales_by_date;

--now actually query the best and worst day!
--I ended up having to use WHERE to filter, because I wanted to include market_date!
SELECT
	market_date,
	total_sale,
	'Best Day' AS status
FROM sales_by_date_ranked3
WHERE total_sale = (SELECT MAX(total_sale) FROM sales_by_date_ranked3)

UNION

SELECT
	market_date,
	total_sale,
	'Worst Day' AS status
FROM sales_by_date_ranked3
WHERE total_sale = (SELECT MIN(total_sale) FROM sales_by_date_ranked3);

--not gonna lie, this was hard !!
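
-- Sketch of the rank-based route the hint describes (not the submitted answer above):
-- sales_by_date_ranked3 already carries RANK_DESC, so the best day is RANK_DESC = 1 and
-- the worst day is the largest RANK_DESC; this avoids the MAX/MIN subqueries.
SELECT market_date, total_sale, 'Best Day' AS status
FROM sales_by_date_ranked3
WHERE RANK_DESC = 1

UNION

SELECT market_date, total_sale, 'Worst Day' AS status
FROM sales_by_date_ranked3
WHERE RANK_DESC = (SELECT MAX(RANK_DESC) FROM sales_by_date_ranked3);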
/* SECTION 3 */

-- Cross Join
@@ -88,27 +152,53 @@ Remember, CROSS JOIN will explode your table rows, so CROSS JOIN should likely b
Think a bit about the row counts: how many distinct vendors, product names are there (x)?
How many customers are there (y).
Before your final group by you should have the product of those two queries (x*y). */
--Forgive me for taking a more complicated route

--first I built a cartesian product as a temporary table, cross joining each customer to every possible vendor item
CREATE TEMP TABLE cartesian_product2 AS
SELECT DISTINCT A.vendor_id, A.product_id, A.original_price, B.customer_id, v.vendor_name
FROM vendor_inventory AS A
JOIN vendor AS v
	ON A.vendor_id = v.vendor_id
CROSS JOIN customer AS B;

--Second, I selected from that cartesian temp table, multiplied each price by 5, and grouped by vendor and product.
SELECT
	c.vendor_name,
	p.product_name,
	SUM(c.original_price*5) AS sale_per_customer_product
FROM cartesian_product2 AS c
JOIN product AS p
	ON c.product_id = p.product_id
GROUP BY
	c.vendor_name,
	p.product_name;
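
-- Hedged single-pass alternative (my own sketch, not the submitted answer): the DISTINCT
-- subquery gives the x distinct vendor/product/price rows, the CROSS JOIN multiplies by the
-- y customers, and grouping on both vendor and product keeps vendors separate.
SELECT
	v.vendor_name,
	p.product_name,
	SUM(x.original_price * 5) AS sale_per_customer_product
FROM (
	SELECT DISTINCT vendor_id, product_id, original_price
	FROM vendor_inventory
) AS x
CROSS JOIN customer AS c
JOIN vendor AS v
	ON x.vendor_id = v.vendor_id
JOIN product AS p
	ON x.product_id = p.product_id
GROUP BY v.vendor_name, p.product_name;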

-- INSERT
/*1. Create a new table "product_units".
This table will contain only products where the `product_qty_type = 'unit'`.
It should use all of the columns from the product table, as well as a new column for the `CURRENT_TIMESTAMP`.
Name the timestamp column `snapshot_timestamp`. */



CREATE TABLE product_units2 (
	product_id INT, --added product_id so the later UPDATE/DELETE can refer to the correct row, as the hints ask
	product_name VARCHAR(100),
	product_qty_type VARCHAR(100) DEFAULT 'unit', --I am presuming you want every product_qty_type to be 'unit'
	snapshot_timestamp TIMESTAMP
);
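
-- Hedged alternative (assumption: every column of the product table should carry over, as the
-- prompt asks): CREATE TABLE ... AS SELECT copies all product columns and adds the snapshot
-- column in one statement.
CREATE TABLE product_units AS
SELECT *, CURRENT_TIMESTAMP AS snapshot_timestamp
FROM product
WHERE product_qty_type = 'unit';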
/*2. Using `INSERT`, add a new row to the product_units table (with an updated timestamp).
This can be any product you desire (e.g. add another record for Apple Pie). */
INSERT INTO product_units2 (product_id, product_name, snapshot_timestamp)
VALUES((SELECT product_id FROM product WHERE product_name = 'Apple Pie'), 'Apple Pie', CURRENT_TIMESTAMP); --CURRENT_TIMESTAMP records the updated time stamp



-- DELETE
/* 1. Delete the older record for the whatever product you added.

HINT: If you don't specify a WHERE clause, you are going to have a bad time.*/

DELETE FROM product_units2
WHERE product_name = 'Apple Pie' --I used WHERE so I don't give myself a heart attack :)!
	AND snapshot_timestamp = (SELECT MIN(snapshot_timestamp)
	                          FROM product_units2
	                          WHERE product_name = 'Apple Pie'); --only the older record is removed



-- UPDATE
@@ -128,6 +218,40 @@ Finally, make sure you have a WHERE statement to update the right row,
you'll need to use product_units.product_id to refer to the correct row within the product_units table.
When you have all of these components, you can run the update statement. */

--adding that extra column to my new table!
ALTER TABLE product_units2
ADD current_quantity INT;

--creating a temp table with the latest market date and quantity per vendor/product through joins; the UPDATE itself will use a correlated subquery instead of a join
CREATE TEMP TABLE quantity_latest2 AS
SELECT
vi.vendor_id,
p.product_name,
vi.product_id,
vi.quantity,
vi.market_date AS latest_date
FROM vendor_inventory AS vi
JOIN product AS p
ON vi.product_id = p.product_id
JOIN (
SELECT vendor_id, product_id, MAX(market_date) AS latest_date
FROM vendor_inventory
GROUP BY vendor_id, product_id
) AS latest
ON vi.vendor_id = latest.vendor_id
AND vi.product_id = latest.product_id
	AND vi.market_date = latest.latest_date;

--Update with a correlated subquery, no join required!
UPDATE product_units2
SET current_quantity = ( -- this correlates my temp table with the table being updated
	SELECT ql.quantity
	FROM quantity_latest2 AS ql
	WHERE ql.product_id = product_units2.product_id
)
WHERE product_id IN (SELECT product_id FROM quantity_latest2); --this limits the update to rows that have a matching inventory record


